Artengine Blog: Video Cache – Activating the Archive

This interview takes as a starting point the VIDEO CACHE project. Hogan’s research into defunct video art repositories online raises many questions about the ephemeral nature of digital culture, and the social/cultural parameters that frame the preservation of and access to such materials.
Video Cache

VIDEO CACHE is a research creation project emerging from Mél Hogan’s doctoral research, in collaboration with Penny McCann, director of SAW Video in Ottawa, and Groupe intervention video (GIV) in Montreal.

VIDEO CACHE took place on November 24, 2010 at GIV. It was a public screening of ten works selected by McCann from the SAW Video Mediatheque collection, for which artists’ fees were paid by GIV. The Mediatheque is Canada’s first large-scale attempt to use the web as a ‘living archive’ – its server crashed in 2009 and the project has been offline since. VIDEO CACHE was also a month-long online exhibit showcasing these ten works, carefully documented and recontextualised for the web. The documentation for VIDEO CACHE remains online, and the event catalogue is available via print-on-demand.

On the one hand, VIDEO CACHE served to document the Mediatheque project by updating the context and addressing in a practical way what it means to ‘activate’ the online archive. On the other hand, it was and remains an entity unto itself. VIDEO CACHE has become an opportunity for Hogan to bring a creative dimension to documentation and to address loss: while it is the ‘cache’ that makes the Mediatheque’s traces visible and re-visit-able, it is the ‘crash’ that signals its ongoing (archival) value.

Mél Hogan is completing her PhD in the Joint Doctorate in Communication at Concordia University, Montreal. Twitter: @mel_hogan

You’ve talked about there being a paradox in the way digital culture is created and shared and the way it is preserved. How do you think preservation, creation, and use should be interrelated in the digital realm?

I don’t know that the paradox needs to be resolved so much as it needs to be acknowledged and understood within digital preservation debates. In my work what stands out is that more attention needs to be paid to digital flows, to circulation, and to the interface and database that facilitate and mask distribution online. Preservation, as an idea and as an ideal, is transformed online, though for some reason, stating this is always a bit controversial.

In archives (traditionally) the emphasis has been on long-term preservation, which more often than not has meant rendering ‘originals’ inaccessible in the present as a means to protect or safeguard them for the future. Because archival discourse and practice have come a long way in the last decade to adapt to the continually changing technoscape, I don’t want to make it sound like the tension is between the traditional, as material/offline, and the new, as digital/online. I concentrate on the digital online as a complex realm when I study the archive, but obviously the discourses and ideas are shared with, if not borrowed from, years of traditional archival theory. I think it is almost impossible not to rely on these established ideas and systems, but at the same time, I think it is important to move beyond them and beyond comparisons between material/digital, offline/online, mainly because the foundational archival concepts—the original, the authentic, and the integral—are conceived of differently in the digital realm. So there is a need for a new basis, a point of analysis that is of the web. We need to start talking about iteration, versions, repetition, and flow…

I think preservation, creation, and use are already interrelated in the digital realm—and that the archival conundrum actually lies in the fact that these elements are difficult to distinguish from each other. I think, if anything, the digital realm will keep moving in the direction of embedding the archive into technologies of creation, dissemination, and display. So maybe the question is how do we conceive of preservation, creation, and use as distinct entities in the digital online realm—rather than interrelated—and if a distinction is no longer possible, what the implications are of that interrelatedness.

You said that in your work ‘more attention needs to be paid to digital flows, to circulation itself, and to the interface and database that facilitate and mask distribution online’. Can you talk a bit more about this and how you think the interrelation between the ‘front end’ and ‘back end’ of online systems informs our perception and use of the archive?

When I say digital flows need to be addressed, I’m talking about community as much as I’m talking about trajectory. It’s an idea I’ve been stuck on for a while but also have a hard time articulating. From reading Ann Cvetkovich, Wendy HK Chun, Josephine Bosma, Anjali Arondekar, Tess Takahashi and others, I’m reminded of the underlying communities—online and offline—the people with a need and compulsion to collect, so that later, something can be made sense of, revealed. The archive ultimately makes possible connections that are sometimes dangerous or undesirable within a particular time and place. My hunch is that while the web has the potential to highlight the connections between people and their documented pasts, and with unprecedented reach, it also risks amalgamating everything into a large undifferentiated database that completely overlooks and overwrites the affective and the unarchivable.

We pay a lot of attention to digital content as objects, albeit virtual, when really an important part of what distinguishes the digital from its material counterparts is, I think, its movement, circulation, flow… the way people share the digital as a space, and travel through that space. Digital stuff is easy to copy—much of what we do on the computer is a form of duplication—and as many artists, theorists, and archivists have pointed out, these copies can be identical to ‘originals.’ Copies are also non-rival in consumption, which has forced us to seriously reconsider value and to come up with alternative economies, which so far seem most successful when thought of as network-creation itself. The mapping out of content, including links between digital nodes, constitutes digital trajectories, and this leads me to question the potential for archival theories that could emerge out of focusing on digital flows and online circulation, rather than the content-centric view imposed onto the digital. I’d like to expand my current project into theories of the web as a mobile archive, or a transient archive—something that highlights the passage of content, but also the movement of creators. And in turn, this means thinking about localization in contrast to the shifting place and space of the virtual archive…

As for the relationship between the front end and the back end, I think that we literally interact with an interface without knowing much about what generates our experiences online beyond that top layer. This isn’t new or limited to the web—this is basically our relationship to most technologies—but in the last few years, developers have pushed to separate content from style and function (or form). This has been mainly because browsers display content differently; the separation made accessibility standards possible and made it easy to quickly and efficiently change the look of the interface without affecting content. Ultimately, the idea was to have form follow function, that is, to have use determine appearance. So if we take that kind of approach into account for the online archive, we begin to see what ideals shape the possibilities of the web for preservation.

Video Cache

What role do you think video artists or other digital content creators should play in the preservation of their own work?

I think this is a really hard question to answer, but I’m going to respond from a personal point of view, as someone who makes video… and I am fully aware that I might make archivists and distributors shudder. I’m really for online access in principle, though I understand that in practice, it takes time, know-how, money, resources, etc. I haven’t even bothered to upload most of my videos online, so this is an ideal, a philosophical position. But it’s an ideal that has not yet seduced Canadian video distributors, and probably won’t anytime soon. And I get this—I get that making decisions about large, valuable collections is something to think about carefully, because once work is posted online, it simultaneously belongs to nobody and everybody.

Part of what inspires me to launch works into cyberspace is the politics of community-based activism that were about getting stuff out, sharing, exchanging ideas. There was an urgency and purpose. And as the tools became increasingly accessible, video art was about countering the mainstream in terms of both representation and means of sharing. But now it seems like the web has taken access to another level, and this is again shifting the politics of video art.

A lot of the politics that came out of video are similar to what we hear now about the web—in terms of its democratizing potential—and yet, the more video becomes common, the more precious the distinctions between art and the vernacular seem to become.

I think that there has to be some sort of middle ground—I prefer to upload videos to my own server rather than to YouTube, for example, whose terms of use aren’t OK with me, unless I make a project with social networks in mind from the outset. But more and more artists use YouTube because it is so ubiquitous and saves on bandwidth and storage. I think that online has to be thought of as many things—many contexts. So for example, if a video is shown online because someone has reviewed it, interviewed the artist, or curated an event for the work, this should make distributors and artists very eager to upload video online as part of that context. I don’t really understand the tight hold on video in these cases.

The fact that a video can be posted and embedded in numerous online contexts does not generally appeal to video distributors in Canada, who would rather see works maintained and presented in controlled environments where issues of resolution, duration, format, storage, and so on, are all carefully calculated to maintain the scarcity model on which they rest. The idea is to keep video art out of the ‘clutter’ of vernacular video—away from YouTube or on a distinct channel within it—so as to retain a curatorial sensibility.

For the archivists reading this, I have to refer to Josephine Bosma’s idea about rethinking loss as the antithesis to preservation because it gives elegance to these ideas. She writes, “We may have a lot to gain from losing control over digital objects. We should consider the ability of some artists to embrace an inherent loss of control over their work less as a challenge to conservation, and more as an inspiration to a solution. […] Both openness to a vital context and openness in terms of physical, material and technological accessibility may well be the best way forward in the strategy of conserving art in the environment of new, networked media.” [1]

My personal idea of what role artists and content creators should play in the preservation of their own work or collections is aligned with Bosma, and others who believe that setting work free allows for unpredictable modes of fan-based archiving tactics to happen. If we think of preservation as a process to keep work ‘alive,’ I can’t think of a better system—even if it is highly unpredictable—than the web. Except, as pointed out by Lucas Hilderbrand, the trend towards online distribution may mean that collection habits change, making it more difficult to keep works than with VHS or DVD, for example. [2]

So for content creators, I think that the idea of preservation has to be disentangled from marketing strategies, which isn’t easy by any means. In fact, the question of how to monetize content on the web may be the question nobody can answer; this demands an unprecedented level of innovation from video distributors whose best move may in fact be to opt out of the online realm altogether or wait for the hype train to pass… if it ever does.

Video Cache

The VIDEO CACHE collaboration with SAW Video activated the archive by screening some of the works from the crashed Mediatheque repository. Re-presentation through emulation or other means is a preservation strategy often undertaken with technological art of many kinds. Did you see VIDEO CACHE in this light at all, as simultaneously documenting and preserving the works?

Yes, I see VIDEO CACHE as a documentation project, but perhaps more importantly as a means of highlighting the ways in which the politics of the archive—any archive—are a reflection of the social movement(s) from which they emerge, including art movement(s). Video art history is imbued with politics and counter-movements, and these shape the discourses surrounding the video art archive on the web.

I see it less as an attempt to preserve the work within a long-term strategy where the material objects (DVDs for example) are central to the project’s history, and more in terms of preservation-as-conversation, keeping the project ‘alive’ by way of continued dialogue. Rooted in a feminist methodology, I frame VIDEO CACHE as a way of bringing to the forefront the people involved in the Mediatheque—as artists or web developers or both—and their understandings of the process and labour involved, along with how their memories shape the ideals of video art and of the archive. It’s important to remember that this all started in the early 2000s, long before YouTube and widespread broadband internet. It’s also important to mention that this project was funded as an online archive—that concept made sense very early on somehow, in that the promise of the web for preservation was something to invest in seriously, backed by hundreds of thousands of dollars of government money.


In some ways, activating the archive through a collaboratively curated event serves to document it better than written documentation would on its own; this is research-creation. The VIDEO CACHE screening and the online exhibit preserve and regenerate the Mediatheque, but very differently.

Curating a programme for a screening makes sense when you are talking about video, but it also raises a slew of questions about this assumption, given that as an online archive the Mediatheque didn’t prioritize high quality copies for screening—it was about showcasing video art online. This is a point in video art’s history that demands a look inward rather than forward. It demands a reflection on the trajectory of video art from its activist roots and from its dissident voices against mainstream representation—by women, queers, people of colour, community activists, etc.—to the current place and value of these scarce collections in an art market.

The Mediatheque is a prime case study for an archive that functioned for and through the web and privileged wide access over long-term material preservation of the files. Whether flawed or visionary as an archival approach, VIDEO CACHE preserves this idea, the Mediatheque’s aura, and the conceptual history of the project. VIDEO CACHE was about extending what I have learned from analyzing grant reports and other administrative documents made available to me by SAW Video into a case study, by highlighting preservation issues from 2003 to the present and showcasing the collection as two different modalities.

VIDEO CACHE featured only 10 of the 486 pieces in the Mediatheque, and this sample was anything but random. So I think it’s worth noting that selection is a subjective part of this preservation process. As the current SAW Video Director, Penny McCann was the best person to make a selection based on the videos’ connections to SAW Video’s institutional history and in relation to those involved in the development of the Mediatheque from the early 2000s on. (McCann’s curatorial statement is available online.)

Eight artists who had work in the original Mediatheque were present for the VIDEO CACHE screening at GIV on November 24, 2010. As a result, the act of curating, on and offline, along with the discussion that followed the screening, is directly linked to the process of documentation—this event is possibly the most complete piece of documentation that exists about the Mediatheque by the people involved in the project.

We also discovered quirky and confusing things in the process of organizing VIDEO CACHE that, again, speak volumes about the archive’s politics. From November 24, 2010 to December 24, 2010, 9 of the 10 videos screened at GIV were showcased online. Although the work had been remunerated $200 as part of the Mediatheque in 2003, the distributor, VTape, opted out of letting us show Gunilla Josephson’s Hello Ingmar (2000) for the month-long online exhibit of VIDEO CACHE. VTape continues its research into fees for streaming in order to develop a standard. This apparently applies to works already online and, as is the case for Josephson’s video, works for which the Mediatheque retains online showcasing rights in perpetuity. I don’t think this is VTape’s prerogative alone—the control over video art distribution, its value, and its position within art worlds and markets continues to be debated, with a prevailing Canadian bias towards the ‘web-means-dead’ credo for video art distribution.

Through the process of curating VIDEO CACHE, we unraveled many things about the Mediatheque archival method itself that feed back into the research on documenting the initiative. This is the ideal intervention for me: collaboration that emerges from research and that also uncovers and generates new threads, new concepts, and new problems. It is a highly self-reflexive approach and one that situates the archive as object and source of study.

More recently, at the May 2011 Database Narrative Archive conference in Montreal, Adrian Miles asked me why I thought it was necessary to activate or revive the Mediatheque project. I think that collectively we can decide whether there is value to a particular collection—after all, appraisal has always been a crucial step for archivists. Nevertheless, a digital loss or a server crash shouldn’t determine what we keep or discard. Until the Mediatheque is revived, VIDEO CACHE and the trail of documents that have come out of it (like this interview) constitute its main preservation efforts.

In your study of defunct or crashed video repositories, what issues would you highlight related to the sustainability of these types of projects? Are there any specific pitfalls you have identified?

Sustainability, by definition, is the capacity to endure. Endurance is built into the idea of the archive, and online, as Wendy Chun argues, it’s the ephemeral itself that endures: “Memory, with its constant degeneration, does not equal storage; although artificial memory has historically combined the transitory with the permanent, the passing with the stable, digital media complicates this relationship by making the permanent into an enduring ephemeral, creating unforeseen degenerative links between humans and machines.” [3]

I think identifying pitfalls is a really important step in research that deals with emergent technology and social media. There is a lot of hype and a lot of excitement about the potential of the web to make things happen, and happen differently. That said, I think it’s important to be able to talk about failure in a generative way, even if highlighting issues related to sustainability is sometimes difficult. In this case, for instance, I am dealing with incredible, invaluable, long-established collections, but am addressing only their host organization’s relationship to the web—how they have resisted it, adapted to it, appropriated it, and so on. So I guess I want to start by saying that I recognize the value of the projects—even if they have ‘failed’—and that identifying pitfalls is in line with, rather than against, this kind of recognition.

Generally, what is most striking is that a lot of the pitfalls are relegated, often mysteriously and suddenly, to technological failures, when in fact much of what happens to archives on and offline can be traced back to human error and social/cultural parameters. This is what I was able to confirm in my doctoral research, and this is what makes it so complicated; it becomes impossible to make a bullet-point list of pitfalls that we can all avoid and build from for future projects. I think engaging with and through technology requires a lot of knowledge on different levels (even with the democratization of media tools), including the upkeep of skills and the tracking of constant developments. And this is often downplayed, if not made invisible, by the interface itself, which in a way becomes another pitfall.

Technology facilitates a lot of things, but ultimately it relies on human decisions, energy, and goals within a specific social, cultural, and legal context. This context also largely determines funding possibilities, the handling of copyright issues, the framing of the relationship between art and ownership, and so on, which then get coded into specific projects online. The process is iterative, and technology certainly influences choices in terms of format, access, and layout, but, as almost everyone I spoke with in this research makes clear, without (human) motivation and energy, online projects die. This probably goes without saying, but there seems to be a lot more energy and money going into creating websites than into maintaining them. This is perhaps a pitfall too: the trend is toward constantly creating new projects (often duplicating entire systems) rather than centralizing content from disparate sources into one content management system, which might make upkeep more feasible. I believe this is something that Videographe plans to test out; there has been mention of offering up the viTheque repository as a template and/or platform for other institutions.

In my study of defunct and crashed online video art repositories in a Canadian context, I found that these philosophies of use differ greatly for each project, but most shared a common discourse about the role, place, and importance of the artist. There is a layer of each of the projects—some more superficial than others—that reflects the history and trajectory of the artist as a category in Canada, the first country to pay exhibition fees to artists (in the mid-70s). This is, of course, not the case in most countries, and so it explains some of the particular pitfalls that Canadian repositories fall into in terms of carrying this professionalization of art into the digital realm, under conditions that differ greatly from similar initiatives elsewhere. So copyright—or the way it is loosely interpreted and applied—is a major element, and I would say pitfall, in most cases of Canadian online video art repositories.

Another pitfall, I think, is the way copyright is being interpreted and, in turn, how technologies are being used to implement some of these ideas that, from an archival point of view, seem to pose additional problems rather than provide viable solutions. Technological protection measures, like files that self-erase/destruct after a period of time (chronodégradable), locks based on password protection, locks that limit the number of copies a user can make, and so on, are all ‘solutions’ justified by the desire to protect works from illegal copying (and which by default block fair and legitimate copying). To impart technology with these roles—rather than engaging with these issues as a social process that accounts for fair dealing—is to misconceive the function of copyright and to throw off its intended balance. Also, with increasingly long terms of copyright (across the globe), this kind of copyright rhetoric becomes commonplace, and online access somehow becomes in itself conceived of as an assault on artists’ rights.

Copyright is a major issue, if only because it is conflated with other issues, and as a result, those underlying issues aren’t directly addressed. Copyright—and Creative Commons for that matter—are not systems of remuneration for artists; they simply inform the parameters for using other people’s stuff without asking, beyond fair dealing.

The initiative to create an online repository requires a huge amount of time, resources, knowledge, and money. This is a point I will keep repeating because being for or against copyright isn’t at the crux of the matter. And, while I think that for the most part an open and free exchange of materials circulating via the web is positive for creativity, I do think copyright and Creative Commons alternatives demand that we continue to question ownership in the face of large user-generated content sites that have at their disposal untapped media content.

So this brings me to the issue of funding and financial sustainability. In the projects I have looked at, it seems that funders (often government funding bodies) are eager to fund the creation and development of online repositories for about two years, after which it remains a bit unclear what is expected or how the project is meant to maintain itself. For the most part, these projects are not self-sustaining, and bring in very little in terms of revenues, at least in comparison to the costs incurred maintaining the site.

I try to always think of these pitfalls and failures as generative, but I also think that we have many (too many) examples of how trying to contain and control digital flows backfires in terms of preservation strategies.


1. Josephine Bosma, “The Gap between Now and Then: On the Conservation of Memory,” in Nettitudes: Let’s Talk Net Art, NAi Publishers (2011).


3. Wendy Hui Kyong Chun, “The Enduring Ephemeral, or the Future Is a Memory,” Critical Inquiry 35 (Autumn 2008), The University of Chicago Press: 148.




e-Artexte: An Interview with Corina MacDonald

What follows is part of a conversation that started over email in March, when Corina was settling into her new job as the e-Artexte Project Manager. e-Artexte is an Open Access (OA) repository for visual arts publishing in Canada, and I think it is safe to say that nothing quite compares to it in terms of its objectives and scope. The repository will offer publishers and authors the option to make their publications available in electronic form, with all the benefits that come with Open Access: metadata harvesting, access through Google Scholar, and so on. Corina is an information specialist, but her job, along with the e-Artexte team’s, also involves advocacy and outreach: convincing Canadian publishers and authors of the benefits of the project and of Open Access. The e-Artexte project is expected to launch in the fall of 2011.



Mél Hogan: Who initiated the OA movement – what did it grow out of? Has there been resistance to the idea of Open Access?

CM: Basically the OA movement is founded on the idea that publicly funded scholarship and research should be freely available for unrestricted use. There are philosophical similarities to parallel movements in free software and culture, although OA really gained momentum in the early 90s as a response to what is known in academic libraries as the ‘serials crisis’.

Unfortunately this crisis has not since been resolved – the term refers to an ongoing situation where large journal publishers exert a monopolistic stranglehold over academic libraries and unreasonably escalate the costs of subscriptions. This has had serious repercussions for scholarly publishing. The rising costs of journal subscriptions are not matched by increases to library budgets, and so libraries have been left scrambling to provide access to the journals that their faculty need, often at the expense of other acquisitions. Faculty are still mostly unaware that the articles they publish in proprietary journals must be bought back by their libraries at increasingly high rates. Practical alternatives have emerged from this situation in the form of ‘Green OA’ (self-archiving) and ‘Gold OA’ (OA journals).

Many of the large journal publishing companies have adapted to Open Access, and have allowed authors to self-archive articles in institutional repositories under varying conditions. Ironically, I think some resistance comes from within the academic milieu itself, where there is a lack of awareness of the situation and prestige is still the overwhelming factor in publication and tenure. This varies by discipline; in the sciences, there has been greater involvement in OA and many groundbreaking projects have emerged from that community, such as the Public Library of Science. But overall there remains a real need for greater education about these issues for authors in academia.

I’m not an expert on all the historical details of the OA movement, so here are some links for further information:

Open Access Overview, by Peter Suber

Open Access Archivangelism, by Stevan Harnad (blog)

The Access Principle, by John Willinsky (e-book)

MH: What does self-archiving mean? How important is the idea of self-archiving in and for digital collections online?

CM: Self-archiving is a term that is quite specific to the Open Access (OA) community. It refers to the process whereby authors, usually from within a university context, deposit digital copies or pre-print versions of their published journal articles in an institutional or thematic OA repository. Many of these repositories offer support for the long-term preservation of the digital content they hold, but I would argue that access is an important impetus for self-archiving and so the term archiving can be somewhat ambiguous here. Self-archiving is an important strategy for Open Access, and many universities are considering making it a mandatory step in publishing by faculty.

Personally, I think that the concept of self-archiving can be relevant to many kinds of digital content creation. For example, I think that artists should be much more proactive about explicitly licensing and making available images of their work online for non-commercial use. Many artists would be happy to allow writers and bloggers to reuse images in their posts and articles online, but by not explicitly defining this reuse, they are essentially contributing to a large grey area of online activity. This is a backwards way of dealing with the situation in which we find ourselves.

So I guess that, for me, the concept of self-archiving can be broadened into the open culture context as a responsibility to explicitly make at least some of your content openly accessible for reuse. There are many tools available to do so – Creative Commons licenses, the Wikimedia Commons and projects like One for the Commons.

MH: If e-Artexte positions itself as an online archive, then how does it define or imagine dealing with the long-term care of its collection? What is its main priority as an archive: use or preservation?

CM: e-Artexte will extend Artexte’s mandate to provide reliable information sources for research in the visual arts. Artexte as a library does not have a mandate to preserve the documents in their collection – they make these documents available for consultation and do their best to ensure their longevity, but do not have the resources or capacity for long-term preservation.

The same approach will apply in principle to e-Artexte – provisions will be put in place to try to ensure the longevity of the digital content, but increased access to research material is the primary goal of the repository.

MH: Can you talk about licensing tools (presumably Creative Commons)? And what does ‘interoperable standards’ mean?

CM: OA repositories do not normally hold any copyright over their contents. Usually there is an agreement with depositors stating that they hold the rights for any content they upload. By default, material is freely available for unrestricted use, as per the definition of OA, but in some cases rights holders may choose to use Creative Commons licenses to place some restrictions on the use of their content (e.g. no commercial use).

Interoperability is really the backbone of networked culture. In this specific context, when we talk about using interoperable standards what we mean is that one repository stores and can export data in the same (or compatible) format as another repository. One of the important functions of a repository is to provide metadata for ‘harvesting’ by search services, which can aggregate and search across multiple repositories at once. For example, if you visit the OAIster website, you can use a single search box to cross-search data from over 1,100 different contributors. Repository content is also harvested by Google Scholar. There are specific metadata and protocol standards that enable this interoperability.
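As a rough illustration of what this harvesting looks like in practice: OAI-PMH, the standard protocol behind repository harvesting, is plain HTTP with a ‘verb’ parameter and a metadata format (unqualified Dublin Core, `oai_dc`, is the one format every compliant repository must support). A minimal Python sketch that builds such a request follows – the repository URL is an invented placeholder:

```python
from urllib.parse import urlencode

def oai_request_url(base_url: str, verb: str = "ListRecords",
                    metadata_prefix: str = "oai_dc") -> str:
    """Build an OAI-PMH harvesting request URL.

    OAI-PMH requests are plain HTTP GETs: a 'verb' (here ListRecords,
    which asks for all records) plus the metadata format to return.
    """
    params = {"verb": verb, "metadataPrefix": metadata_prefix}
    return f"{base_url}?{urlencode(params)}"

# A harvester like OAIster would fetch this URL, parse the XML response,
# and follow resumption tokens for the next batch of records.
url = oai_request_url("https://repository.example.org/oai")
```

This is only the request-building half of a harvester; parsing the XML response and paging through resumption tokens is where the real work lies.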

MH: Can you expand on the idea of the ‘content provider’?

CM: We are in a networked culture where data is constantly moving around, being selected and recombined along the way by different types of services. As a result, the way we think about providing access to content is also changing. It is one thing to develop your own website where users can come to discover your content – but there is now an opportunity to make content available more broadly and in new contexts through federated search services or content aggregators, OAIster being one example.

To extend this idea we could also consider the city as a content provider – for example, cities like Toronto, Vancouver and Ottawa that have adopted Open Data policies. In doing so they have been able to benefit from the experimentation and work of their citizens. There are communities of developers eager to get their hands on municipal data so they can build applications that tell people when the next bus is coming, or allow them to report on needed repairs in their neighbourhoods, etc. (there are many, many more examples). Making city data open suddenly spawns multiple contexts that city councillors may never have envisaged, and that the city may never have had the expertise or resources to develop on its own.

Cultural institutions like libraries, archives and museums have very rich content that they can make available in a similar way, allowing them to benefit from the imagination and innovation of their communities. Not every museum will be able to develop their own augmented reality app that provides contextual collection information based on GPS coordinates for example, but they can take the steps necessary to make sure that their content isn’t inaccessible when those developers come knocking. The Brooklyn Museum and The Powerhouse in Australia are two museums leading the way in this regard, having created open APIs for information about their collections.
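The shape of that interaction can be sketched offline. The records and fields below are invented for illustration – real collection APIs (the Brooklyn Museum’s, the Powerhouse’s) each define their own schema – but the pattern is the same: fetch JSON records from the institution’s open API, then build something the institution never anticipated, such as a location-aware lookup:

```python
import json

# Hypothetical collection-API response; real open APIs return JSON
# records along these general lines, with their own field names.
records = json.loads("""[
  {"id": 1, "title": "Untitled", "medium": "video", "lat": 45.5, "lon": -73.6},
  {"id": 2, "title": "Study",    "medium": "paint", "lat": 43.7, "lon": -79.4}
]""")

def near(items, lat, lon, radius=1.0):
    """Naive GPS filter: items within a lat/lon box around the viewer,
    the kind of lookup an augmented-reality app would run."""
    return [r for r in items
            if abs(r["lat"] - lat) <= radius and abs(r["lon"] - lon) <= radius]

nearby = near(records, 45.5, -73.6)
```

The point is not the ten-line filter itself but that, once the data is openly accessible, any developer can write it.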

So I see this role of content providers as an important shift in how institutions can imagine themselves contributing to a larger landscape of networked resources, allowing their content to live in new and possibly unforeseen contexts. Open Access repositories are one component of this landscape.

P.S. A group called Montreal Ouvert is working on convincing the city of Montreal to adopt an Open Data policy.

MH: What other models or projects out there have you referenced or used to build e-Artexte?

CM: We are using Eprints as the basis for the repository, which is an open source repository software developed and maintained at the University of Southampton. However, this software is configured ‘out of the box’ for an academic publishing context. So we are adapting this system to a visual arts context, and specifically to the context of Artexte’s existing cataloguing practices as an organization that has been collecting arts documentation and publications for 30 years. I am not aware of any other thematic OA repositories dedicated to critical art writing, so this is a pioneering project in many ways. It also leverages OA outside of the academic milieu which has not been done extensively.

MH: What are some of the obstacles you have faced? Are there worries when launching a project of this scale in terms of backups, and the ephemeral nature of digital media? If so, what kinds of precautions are put in place to ensure the long life of the project? What kinds of human labour and investments are required to keep this project going after you have created the site?

CM: This is definitely an ambitious project for a small organization to undertake. We are fortunate to be working with university colleagues who have experience in OA, and there is also a large and active Eprints community that we can look to for guidance.

There are certainly some obstacles in terms of the ongoing resources required to maintain the repository, but some of the decisions we make now will help to minimize future costs or problems. I think because we are using well established and actively maintained open source software, we can feel fairly certain that it will be sustained for some time into the future. Interoperable open standards are also an important foundation of digital preservation and sustainability. Of course the project will still be vulnerable to the vagaries of hardware failures, server crashes and other unforeseen disasters. Backups will need to be made regularly, and in keeping with the library mantra of LOCKSS (Lots of Copies Keeps Stuff Safe), we can keep multiple copies of backups in different locations.
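The LOCKSS idea is simple enough to sketch: keep independent copies of every backup in separate locations, so no single hardware failure or server crash takes out the data. A minimal Python illustration follows – the temporary directories stand in for off-site mirrors, and the dump filename is invented:

```python
import shutil
import tempfile
from pathlib import Path

def replicate_backup(backup, destinations):
    """Copy one backup file into several independent locations
    (LOCKSS: Lots of Copies Keeps Stuff Safe); return the copies."""
    copies = []
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)  # e.g. mounted off-site drives
        copies.append(Path(shutil.copy2(backup, dest)))
    return copies

# Demo: temporary directories stand in for the off-site mirrors.
tmp = Path(tempfile.mkdtemp())
dump = tmp / "repository-dump.sql"
dump.write_text("-- nightly database dump --")
copies = replicate_backup(dump, [tmp / "mirror_a", tmp / "mirror_b"])
```

In practice this would run on a schedule, and at least one destination would sit on different hardware in a different building – the failure mode to design against is exactly the one the Mediatheque suffered.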

I think what will be most crucial to the ongoing sustainability of the project is really the engagement of authors and publishers. There is education and outreach to do in the visual arts milieu about Open Access and the benefits of depositing work in an OA repository. e-Artexte has the potential to become an important research resource, but it must be cultivated over time with the collaboration and participation of the community. So in a sense, building the repository itself is only the first phase of this project.

Corina MacDonald is an independent information specialist and web developer specializing in digital cultural collections management. While a student in Information Studies at McGill (MLIS, 2006-2008) she worked as a research assistant with the DOCAM research alliance (, where she learned about issues surrounding the documentation of digital and technology-driven art. After graduation she worked at the Canadian Heritage Information Network on the Artefacts Canada database, an aggregation of data from over 400 Canadian museums. In 2010 she began doing freelance software and web development work, and is currently the project manager for e-Artexte – an Open Access repository for Canadian visual arts documentation initiated by Artexte. When she isn’t crunching metadata she is djing and hosting modular_systems, a radio show on CKUT 90.3FM where she has been involved as a volunteer since 1996. She is also a member of the editorial team of Vague Terrain, an online digital arts publication and blog.

OutCrowd: the Interview that never was

Questions for Mél Hogan, by OutCrowd (05/2011)

When you think of someone reading “No More Potlucks,” who do you instinctually imagine as the reader?

I first imagine the readers to be the people featured in the journal. And then I imagine that their friends, their communities, their families, and their online social networks become readers. And then some readers become contributors, and the cycle repeats itself, and grows.

Your most recent issue, titled “animal,” explored our recent fascinations with animals and our more primal side as humans. Where do you find the ideas for your themes?

Ah. Good question. It’s actually a lengthy process and an important one. M-C MacPhee (content curator) and I make long lists year after year of words that we think might make good themes. What makes a good word? Well, usually we like a loaded word, and by that I mean a word that can be interpreted in a lot of different ways and that is rich in meaning on all those fronts. Trespassing. Fixate. Ego. Anonyme. Words that have multiple meanings are great because they help us to imagine different types of contributors. So that’s what we do next; we associate words with artists, activists and academics whose work we feel strongly about and when we have a good match, we make it a theme. My dream would be to have words that are bilingual (like Animal, Rage, Rural, Chance, etc.) for each issue, but since that doesn’t always work out, we have one or two in French each year—the two languages of NMP also add layers of meaning for themes, as you can probably imagine.

Do you fear for the future of the ‘magazine’?

Not at all. Why? What is there to fear in the future? Based on what has developed in the last five years in terms of open source content management systems and print-on-demand, I can only imagine what another ten, fifteen years will bring… Drupal, DropBox, Lulu, etc. are all web-based services NMP relies on, and they make doing what we do not only possible but possible with so little money. We have no funds, except for a party each year (thank you Miriam Ginestier!) and a few generous donors who give us money, with no strings attached. But basically everyone in NMP—editors and contributors alike—is volunteering their talent, energy, and time because they believe in the project, and presumably gain and generate value in other ways.

So I don’t fear for the future of the journal in terms of sustainability. What we can do now we could never have done even a few years ago. The sustainability of the journal depends a lot on the efforts and the drive of people – technology and money are second to that.

I definitely could not keep NMP going without M-C MacPhee, who has been my best friend for more than a decade. It might sound corny, but at the heart of any good project is a good relationship. NMP actually has a great team of people helping out in different ways, to different capacities and with varying time commitments. Info on our team is found here:

Of course if someone wanted to give us a lot of money so that NMP could become a full time job for 2 or 3 of us, that would be the most amazing thing ever. But so far, I’d say that I’m quite reluctant to think that money would necessarily make NMP better. I’d love to pay contributors and editors, etc, of course, but I’d want for that to be substantial, otherwise managing money just becomes another task to take on…and we’re pretty full on as it is!

For now the momentum is SO great—we are booking issues one year ahead of time (!!)—so we feel energized and inspired by this. We also get invited to speak about NMP a lot, at art festivals and academic conferences, so we’re fueled by the support we are getting.

What worries me though is the state of the internet more generally—will there be usage based billing, will throttling continue, will copyright become (more) of a hassle, will the web turn into (more of) a giant shopping mall… the bigger picture worries me a little because right now I feel really free doing what I do with NMP, though I am aware that the web could be transformed considerably by regulations/policy in the next few years. I’m hoping the strong counter current to these commercial forces will maintain the balance, keep everyone in check, if not tilt the web in favour of continued experimentation and creative freedom.

Where do you imagine a line of censorship for such a free-thinking magazine? What falls to the cutting room floor?

The only time I considered the issue of censorship was in an early issue where we had a ‘porn’ video and I worried that our ISP or host might flag it. I’m pretty sure we agreed not to host “adult” material when we bought our server space. Nothing like that ever happened, but it does make us cognizant of the fact that ownership of content online, and control over it, is murky. So yeah, I do worry about the general policing of the internet… In that way we have very little control over NMP. But I personally accept that as a risk of doing stuff online, along with server crashes, sites getting hacked and spammed, and so on.

What falls to the cutting room floor isn’t really about pieces that go too far or are explicit in ways we aren’t willing to stand by – in fact we encourage people to push the boundaries of acceptability (acceptable to who?) in NMP. This is true both in terms of experimental writing and multimedia presentation of work, and in terms of content. What doesn’t make it in—though it has happened that we’ve turned pieces down—are just pieces that aren’t quite ready for the deadline, that we normally rework for a later issue. NMP is 90% by invitation, so when we ask people it’s because we are already familiar with their previous work. More and more we are getting outside proposals though, so we’ll see if that changes the process. We encourage proposals and are open to change.

How do you communicate your style to contributing artists—is there a way that you expect them to think?

We usually refer artists and other contributors to previous issues for them to get a sense of what the journal is about. Whatever they ‘get’ from NMP, that’s usually enough to guide their submission.

We don’t have a mandate but we somehow, I think, have a very strong editorial voice. We really encourage people to publish stuff they can’t see being published anywhere else. For academics in particular this can mean work that isn’t accepted by more traditional peer-reviewed journals that normally have a very long turnaround, and works that are presented as video, audio, or any combination of these things.

For artists, NMP is a great place to not only showcase their work but to have it reviewed and written about, either by being matched to a curator or an NMP editor. McLeod’s video series is a really good example of this—each issue has a featured video that is documented and reviewed by a curator. It’s very important to write about art and to post interviews with artists alongside the work itself. For activists, we think NMP is a place to be heard—it’s definitely an alternative to a newspaper or a blog, in part because it’s within the context of an arts and culture journal.

I’m always happy when people write and tell us NMP is THE place they want their work published, and not because it doesn’t fit in other contexts but because NMP is the best one for whatever they are producing. This has happened and I love hearing how and why NMP works for them, and I feel like that’s because of the amazing content and how they relate to it.

For me its important to balance the artist-academic-activist content in each issue, but as far as what people contribute, we’re very open and often publish things that push our own boundaries as editors, or that we don’t fully agree with, or that we’re not sure we fully understand. We try to balance that with being accountable and responsible for the overall publication seeing as one contribution belongs to an issue and influences the overall content of NMP.

What do you hope for a reader to think after closing the magazine and moving on to the rest of the day?

It’s funny because sometimes I’m so busy working on the details of getting the publication together online and in print, that I don’t take time to think of the important questions, like this one.

Off the top of my head, what I hope readers get is a sense that there is a lot going on in Canada in terms of art, theory and politics. And I hope that reading about it, or watching/listening/reading about it inspires readers to make things; either start their own journal, make art, make noise in their communities, or pitch an idea to NMP!

How does your design in No More Potlucks express the relationship between images and writing?

This is an interesting question because part of it is about my limitations as a web designer for the online version, and the way we need to have certain features automated for the sake of consistency… but also to have a stable workflow. So that means that things like the article thumbnail might crop and scale to frame something differently than if I could do it all manually, but the trade-off means I can take this on and stay sane by maximizing the potential of the content management system. Over time, I imagine, I will make these things even better. When we initially designed NMP we never imagined it would take off the way it has, so if I could go back in time I would revise the back end and front end design a lot. This would mean that, to answer your question with a specific example, we could insert images through our back end interface within the text, rather than just at the top of the piece. When we do insert images in the body of the text, as some pieces require it, we do it through FTP. Maybe that’s too technical or specific, but anyway, it’s just to say that for the online version, these things are simultaneously flexible and restraining.

For laying out the print version (which has become so much more enjoyable since working with Momoko Allard) we have a pretty standard template, which we improve each year. I love our look now, in print. We decided to have a lot of white space and let the images and texts breathe. We design the issue from a grid: two columns for most texts, and a single (double-width) column for fiction pieces. We are a hybrid, in terms of layout, between an art catalogue and a journal, so we design each piece to be relatively the same, and each issue to resemble the one before.

All the issues are available from Lulu, via print-on-demand (

Choosing the cover image is pretty intense, sometimes. A great image isn’t necessarily a great cover. And as we have learned, a cover speaks loudly about who people think we are and what people assume we represent. So I choose very carefully… Over time, the covers as a collection of images take on their own meaning, and I think represent well the general idea of NMP. But I leave it up to you to say what that is… what the connections are between themes, images, etc.

What is ugly to you?

Injustice. Insecurities. Taking things too seriously. Inequality. These things are ugly.

In terms of design aesthetics, I’m probably quite conservative. I like clean lines, minimalist and simple grid layouts, and the choice of one good font for body text and one slightly more illustrative for titles. I think good design is about knowing why you are putting things where you are putting them. Everything has a place and until you really know that, you don’t mess with the rules, you follow them! I do still feel quite limited in my CSS skills to get NMP to look exactly how I want it to in Drupal, but it’s OK for now – the design works.

In print, the design is where I want it to be. 

Many of the articles in your magazine cover diverse transnational subjects. How have you navigated the magazine’s multicultural, multi-lingual perspective in a way that inspires universal interest?

We strive to showcase a lot of Canadian content, at least 75% of any issue. That said, who and what counts as Canadian is open and we don’t have a firm take on that. But we are quite strict on maintaining a certain Canadian-ness in whatever shape and form it takes on given that it would be so easy to fill the pages with American content—there is so much being produced south of the border that resonates with NMP.

Which issue would you recommend for a first-time reader?

Pretty soon all the issues will be free online—we are ditching the subscription model—so I would recommend that someone just playfully navigate the site and read it diagonally… whatever draws them in, randomly or thematically, for research or leisure.

What is a magazine without its design?


Of course if you ask the designer they’re going to tell you it’s really important… but seriously, I think design is funny in the sense that if you do it well, the work and craft of it disappear, and so it is not really recognized (except by other designers, usually). I think NMP could use a slight upgrade – a slight freshener. As the art director I try to balance those changes with the consistency of what has become NMP and what people expect when they visit the site.

I think design is communication. Design says as much as (more than?) content. Design speaks to us on another level, though many of us haven’t developed the affective vocabulary for it, nor a shared sense of how colour, shape and form appeal or repel. We feel it, but we don’t necessarily understand it. And so the layout of the website, and of the print journal, means we read the content differently, whether or not we are aware of the design.

How do you judge a really successful issue?

I’m not sure. For me, there have been a few little dances-of-joy and virtual high5s when I get someone whose work I really admire to be in NMP… like Ann Cvetkovich, Laura Murray or Jane Anderson… or Mary Bryson, Kim Sawchuk, Anne Golden, Line Chamberland, Jane Siberry… all these amazing thinkers and doers… the list is quite long now. To me this is a great measure of success.  The mix of academic, activist and artist content is a measure of success too. As is diversity by all definitions.

There have also been pieces that have had an insane amount of hits: I’m thinking here of Sarah Maple’s work, the piece on the late Will Munro, and pretty much anything Yasmin Nair writes. I can see from our stats, and more recently through a visible counter at the bottom of each page, which submissions get the most attention. This is also a measure of success.

So far though, I think each issue has been successful by virtue of being up on time, out in print, and full of amazing content… and this for 15 issues now. 

Review: Chosen by Jackie Gallant

This review is part of an ongoing series of video art reviews located at Wayward. I only review work I love.

Chosen by Jackie Gallant

If I were a curator, I would programme Chosen into every possible screening.

Chosen remixes, re-voices and reconstructs the starstruck gaze.

Probably a few of you, like me, think of exquisite octopad drumming when you think of Jackie Gallant. Who would even know about the octopad if it wasn’t for Gallant? Not me. Three years ago I interviewed Gallant in ArtThreat, where she revealed a few secrets about her rock and punk roots, and admitted to the pleasures of performance – “the tightrope you walk on when in front of an audience.”

Owen Chapman (DJ O+) also included Gallant’s insights about sampling in his dissertation, and highlighted Gallant’s gift for improvisation and manipulation.

What does this have to do with video art? With Chosen? My guess is that it has everything to do with it.

Gallant knows pacing, rhythm, and timing. And Gallant knows sampling: she knows how to extract the good bits, how to mix them up and manipulate them, and how to make them fit together to reveal something else, something greater.

Gallant’s craft at weaving media, sound with image, reveals the fast-paced, absurd, funny, and most often tragic feeling of celebrity hype. Lindsay Lohan is chosen by Gallant to reveal, if not testify to, her own half-truths, often slippery clichés and cringe-worthy teenage delusions. And yet there is nothing here of judgement: the artist’s appearances in this dizzying underwater world of expectations suggest that Gallant relates to rather than rejects the awkward trajectory in and out of the spotlight.

See more of Gallant’s videos on 52 Pick Up Videos: 2009 and 2010.

Mél Hogan, May 18, 2011.


DNA Symposium: Lightning Talk on Failure

I’m going to tell you a story about an archive.

Canada has one of the oldest online video art archives, if not the oldest, on the web. Few people know about it.

Launched in 2003 after a year as a pilot project, SAW Video in Ottawa was responsible for creating a repository of 486 independently-produced videos accessible for free online, in full length. This project was called the Mediatheque. Its web infrastructure was custom-built, and the archival process relied heavily on trial and error as no precedent existed for this kind of endeavour.

The funding for the Mediatheque was allocated specifically as an archival grant under the Canadian Culture Online Funding Programs from the Department of Canadian Heritage. Preceding YouTube by two years and reaching a terabyte (over 1,000 GB) of content, the Mediatheque placed an open call for video artists and paid 200 dollars per submission for the rights to showcase works online for three years.

The Mediatheque lived long enough to see this contract with artists expire, though most artists agreed to renew the rights, this time without payment. In this second phase, starting in 2006, the Mediatheque continued to add new works, and featured more than 300 videos from the original pool.

In 2009, the Mediatheque’s server crashed and the project has been offline since. There was no database backup at SAW Video, nor with their corporate sponsor who was hosting the project. Neither had assumed it their responsibility. SAW Video reassembled its website using Google Cache but never attempted to piece together the Mediatheque via the Wayback Machine or other means. For SAW Video, the crash represented an opportunity, if not a cry for change. It was time to reflect on the project in relation to the emergent social media that now largely constitute the web. After more than six years online, had the needs for this archive changed? Had the context for video art expanded in ways that render the project obsolete? Revived, would the Mediatheque be a relic, or does it remain a failure of the very concept of online archiving by virtue of its ephemeral nature? Is failure embedded into the concept of the online archive?

Apart from the grant application and a few reports to the government, little documentation exists about the Mediatheque, and nothing at all exists that attempts to answer these questions or to situate the Mediatheque within the framework of media studies, archival theory, or the history of video art. (Besides my doctoral work, that is.)

Today, SAW Video plans to rebuild an archive containing many of these videos, though no longer under the name Mediatheque and with no necessary attachment to the 2003 version. In this sense, the new repository is neither a straightforward continuation nor an attempt to replace the defunct project. The new version was originally intended for December 2010, but continued delays suggest that the conundrums of the online archive remain – pragmatic and philosophical questions alike are difficult to answer within the long-term thinking demanded of the archive by its very definition.

So the burning question is: is failure embedded?

Web Archeology

Documenting my ‘digs’ of video art online repositories from a Canadian cultural context here:

And for the Mediatheque, here.

Over the course of the next few months, I will post video clips at that explain the history of these now mostly defunct websites.

Because it is a) difficult to describe a trajectory back through the Wayback Machine (Internet Archive) that includes numerous iterations of the same project, and because b) it is almost impossible to do the same ‘search’ twice, I am recording my findings and posting them here.

These digs are totally unrehearsed and unchoreographed, which means that I often get lost in the regenerated loops of the wayback archival process, and take you with me. These digs are meant to record the research process as much as they are intended to document the portals I explore.

VIDEO VORTEX READER 2: Crashing the Archive/Archiving the Crash (2011)

Hogan, Mél. “Archiving the Crash/Crashing the Archive.” In Video Vortex Reader II: Moving Images Beyond YouTube. Amsterdam: Institute of Network Cultures, 2011.

Video Vortex Reader II: moving images beyond YouTube

About the book: Video Vortex Reader II is the Institute of Network Cultures’ second collection of texts that critically explore the rapidly changing landscape of online video and its use. With the success of YouTube (‘2 billion views per day’) and the rise of other online video sharing platforms, the moving image has become expansively more popular on the Web, significantly contributing to the culture and ecology of the internet and our everyday lives. In response, the Video Vortex project continues to examine critical issues that are emerging around the production and distribution of online video content.

Following the success of the mailing list, the website and first Video Vortex Reader in 2008, recent Video Vortex conferences in Ankara (October 2008), Split (May 2009) and Brussels (November 2009) have sparked a number of new insights, debates and conversations regarding the politics, aesthetics, and artistic possibilities of online video. Through contributions from scholars, artists, activists and many more, Video Vortex Reader II asks what is occurring within and beyond the bounds of Google’s YouTube? How are the possibilities of online video, from the accessibility of reusable content to the internet as a distribution channel, being distinctly shaped by the increasing diversity of users taking part in creating and sharing moving images over the web?

Contributors: Perry Bard, Natalie Bookchin, Vito Campanelli, Andrew Clay, Alexandra Crosby, Alejandro Duque, Sandra Fauconnier, Albert Figurt, Sam Gregory, Cecilia Guida, Stefan Heidenreich, Larissa Hjorth, Mél Hogan, Nuraini Juliastuti, Sarah Késenne, Elizabeth Losh, Geert Lovink, Andrew Lowenthal, Rosa Menkman, Gabriel Menotti, Rachel Somers Miles, Andrew Gryf Paterson, Teague Schneiter, Jan Simons, Evelin Stermitz, Blake Stimson, David Teh, Ferdiansyah Thajib, Andreas Treske, Robrecht Vanderbeeken, Linda Wallace, Brian Willems, Matthew Williamson, Tara Zepel.

Colophon: Editors: Geert Lovink and Rachel Somers Miles. Copy Editor: Nicole Heber. Design: Katja van Stiphout. Cover Image: Team Thursday. Printer: Ten Klei, Amsterdam. Publisher: Institute of Network Cultures, Amsterdam. Supported by: the School for Communication and Design at the Amsterdam University of Applied Sciences (Hogeschool van Amsterdam DMCI). The Video Vortex Reader is produced as part of the Culture Vortex research program, which is supported by Foundation Innovation Alliance (SIA – Stichting Innovatie Alliantie).


To order a hard copy of Video Vortex Reader II email:

Geert Lovink and Rachel Somers Miles (eds), Video Vortex Reader II: moving images beyond YouTube, Amsterdam: Institute of Network Cultures, 2011. ISBN: 978-90-78146-12-4, paperback, 378 pages.

Review: Wicked Games

Originally posted @ Wayward.

Wicked Games by George

I find this video magical. And haunting. And really hard to write about.

I saw George’s video at the EDGY WOMEN festival, in a programme curated by Dayna McLeod, the founder and project manager of 52pickupvideos. Wicked Games was one video among 39 others at the festival, created by 26 artists who currently contribute to 52pickupvideos, or who have done so in the past. It’s an amazing online venue for artists, and is open to newcomers who are willing to take on the challenge of making a new video each week, consecutively, for one year.

Wicked Games can be found here:

What is also worth noting about 52pickupvideos is that it invites artists — in the case of George, a dancer and choreographer — to express, experiment and work through video no matter what their background or prior experience with the medium.

Wicked Games stood out for me at the EDGY WOMEN festival screening, though I haven’t found it easy to pinpoint why, or what kind of effect it has had on me. There is something about the seamlessness of this video and the careful crafting of its sound that makes it hard to dissect after the fact, though in the moment – watching it – I was fully captivated.

The plural of ‘Games’ in the title hints at the way this piece is crafted: playful but definitely wicked, too. It captivates and repels. The wicked games in this video are the levels of reality: the intense gaze from the moment the video starts, in sync with an accelerated slow-motion that sets the tone and speed of the piece. A swaying man stares into the camera singing Chris Isaak’s Wicked Game over the sound of a very present creaky floor. The man’s gaze is intense but not inviting, and is interrupted by a high-contrast black-and-white version of himself. These ‘interruptions’ bring an unmistakably iMovie aesthetic to the video, a formal decision that speaks not only to George’s use of video to comment on video, but of editing to comment on movement.

A second chapter begins when the two characters appear in the frame for a forced and constrained dialogue – a gesture marked in the narrative by the ‘main’ character leaning forward, indicating that he is turning on/off the camera. This suggests a new level at which the viewer is expected to interact. The viewer shifts from witness to audience: we are invited to acknowledge the act of recording, the presence of the camera, and a performance that is in itself only made possible by its re-presentation in ‘real time’. In this way, the work demands the attention of its audience, and in turn, the audience makes the work complete.

Mél Hogan, April 7, 2011.