This is an immersive VR political comic: a satirical story built with WebVR software, exploring the aesthetic and narrative possibilities of the medium.
It tells the news story of the UK government’s hugely popular ‘Eat Out to Help Out’ scheme during the Covid pandemic (2020), under which 100 million restaurant meals were consumed. The scheme arguably boosted Chancellor Rishi Sunak’s political popularity, but evidence shows it likely contributed to the deadly second wave of the pandemic.
Artistic influences on this work include André Fougeron’s painting ‘Atlantic Civilisation’, a panorama over 5.5 m wide that tells a vast political narrative in a comic style. Other influences were Cold War Steve’s vicious political photocollages and David Hockney’s wide immersive panoramas.
The work explores the aesthetic possibilities of WebVR, using the A-Frame framework.
An A-Frame scene is composed of multiple overlapping layers of drawings, separated by distance along the Z-axis. Authoring HTML in A-Frame encourages a layered approach, as a scene contains a list of entities. This aesthetic of successive 2D layers separated within a 3D space gives a feeling of space and depth, resulting in an effect like a traditional diorama.
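A minimal sketch of this layered approach, as A-Frame HTML (the entity positions and image filenames here are illustrative, not taken from the actual piece):

```html
<!-- Three flat drawings layered along the Z-axis, diorama-style -->
<a-scene>
  <!-- background layer, furthest from the camera -->
  <a-image src="background.png" position="0 1.6 -6" width="8" height="4"></a-image>
  <!-- middle layer -->
  <a-image src="midground.png" position="0 1.6 -4" width="6" height="3"></a-image>
  <!-- foreground layer, closest to the viewer -->
  <a-image src="foreground.png" position="0 1.6 -2" width="4" height="2"></a-image>
</a-scene>
```

Because each drawing sits at a different Z depth, moving the camera produces the parallax that gives the scene its sense of space.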
I was attracted by the fishbowl distortion of the drawings when moving around the immersive 3D space; it felt like moving around inside a sphere, zooming in and out of the entities.
This is a panoramic story, told over time. A-Frame lends itself to wide, panoramic storytelling.
This is an experiment in AR aesthetic approaches and narrative composition in political comics.
This “AR Vignette” is a tiny multimedia comment on the news. The AR effect is a 3D montage of overlapping flat planes, similar to older traditions, particularly dioramas. These AR montages tell stories in a particular aesthetic, through association, positioning and juxtaposition.
The technology used is AR.js and A-Frame. AR.js is a lightweight library for AR on the Web. A-Frame allows the creation of 3D Scenes and Virtual Reality experiences.
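The combination can be sketched as follows, in A-Frame HTML. This is only an illustrative fragment: it uses AR.js’s stock “hiro” marker preset and invented image filenames, whereas the actual piece uses its own tracking image:

```html
<!-- AR.js + A-Frame: layered flat drawings appear over a tracking marker -->
<a-scene embedded arjs>
  <a-marker preset="hiro">
    <!-- two planes at different heights above the marker, forming the montage -->
    <a-image src="drawing-back.png" position="0 0 -0.5" rotation="-90 0 0"></a-image>
    <a-image src="drawing-front.png" position="0 0.5 0" rotation="-90 0 0"></a-image>
  </a-marker>
  <a-entity camera></a-entity>
</a-scene>
```

When the phone camera recognises the marker, the layered planes are drawn on top of the live camera feed, producing the diorama-like montage described above.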
The subject of the work is UK care homes during the pandemic. The Government imposed strict rules: no visitors, no physical contact. Daily on TV, Health Secretary Matt Hancock announced the rules from the podium: social distancing, no hugging. Yet at the same time he was having an affair with his secretary, meeting in secret at Whitehall.
His response was: “Well, I’m only human.”
Composition analysis
The park alongside Parliament serves as the backdrop and Tracking Image. Scanning the Tracking Image reveals Hancock’s face, enlarged, with a transparent mask. We see through the mask where he speaks, and there we witness his betrayal: secretly embracing his mistress. In front of the mask, we see a care home with a family gathered outside, waving through the window. On the opposite side, Hancock stands at the podium, apologising throughout.
This experiment in narrative composition and the aesthetics of AR considers the narrative possibilities of see-through shapes, and of shapes and objects layered over and around one another, exploring foregrounding, depth, layering, and what lies behind. It tells a story through layering, positioning, and transparency within drawings. It explores photocollage, digital drawing, mixed media, multimedia, and narrative composition.
Matt Hancock, the Health Secretary, is taking part in the opening ceremony of a new hospital. A head nurse stands next to him. He sneezes, spray goes up, and virus floats through the air.
She winces …
To view the AR …
On your mobile phone, start a browser (ideally Firefox or Chrome).
In your browser address bar, type the following: https://boomar.github.io/hancock
You will then be asked to share your camera (say YES).
Point your camera at the image below.
You will see an augmented scene, on top of the image. Hancock (UK health minister) sneezes and infects all the NHS staff!
Traditions
These recent AR miniatures or “vignettes” do seem similar to old traditions – here are a few:
Dioramas
The word diorama can refer either to a 19th-century mobile theatre device or, in modern usage, to a three-dimensional full-size or miniature model, sometimes enclosed in a glass showcase for a museum. Dioramas are often built by hobbyists as part of related hobbies such as military vehicle modelling, miniature figure modelling, or aircraft modelling.
The word “diorama” originated in 1823 as the name of a picture-viewing device, from the French of 1822. It literally means “through that which is seen”, from the Greek di- “through” + orama “that which is seen, a sight”. The diorama was invented by Louis Daguerre and Charles Marie Bouton, first exhibited in Paris in July 1822 and in London on 29 September 1823. The meaning “small-scale replica of a scene” dates from 1902.
Point your camera at the image below. You will see an augmented scene…
You will see:
Johnson walks with his girlfriend through the bluebells at his country estate (Chequers). A plane carrying PPE flies past in the distance, unnoticed. Countryside and flowers, birds singing. The PM is blissfully disinterested in, and unconcerned by, the thousands of deaths around him: deaths he caused, and still does nothing about. Cavalier incompetence over the coronavirus outbreak. He likes his country breaks. Good croquet on the blossom-strewn Chequers lawns.
Sherwood Rise was the world’s first augmented reality (AR) novel, the result of a post-doc at the University of Bedfordshire dedicated to experimenting with the future of the book and with how to make a physical book interactive. Dave Moorhead wrote the script, and the result was an interactive story that pushed the boundaries of the form.
This was an AR transmedia interactive graphic novel and game, told over four days through a range of media and formats: newspapers, AR on mobile phones, emails, websites, blogs, sound, music, and graphic novels. The story was essentially the classic Robin Hood story applied to post-financial-crash Britain.
Over four days you receive a newspaper which you can interact with via AR on a mobile phone. Your interaction updates a database, which then dictates the newspaper edition you receive the next day. This is a type of “real game” in which you simulate taking part in a revolution and are forced to take sides.
It’s a story told in a range of media on multiple platforms, expanding a traditional printed story by adding additional layers of story through AR, and an interactive story where readers determine the outcome. This was a research collaboration between Dave Miller and Dave Moorhead, post-doc research funded by the University of Bedfordshire and the UNESCO Future of the Book project, 2012-14.
An academic article published in ‘Convergence: The International Journal of Research into New Media Technologies’ describes the project in detail.
The AR browser technology (Junaio) used as the basis of the story is now discontinued, so I am unable currently to provide a link to a working version of the project.
This is a book about spies and the French Resistance during WW2. The book (by Alex Gerlis) was already written and published, and the author wanted a locative AR mockup/working demo to present to publishers and agents. The concept was to expand the book using AR and Layar.
The solution needed to use image recognition (to recognise markers in the book) and also location.
Images and icons in the book open up, via AR, layers of additional information: maps, sounds, videos, ideally materials that already exist from the original book research, which the AR brings to life.
Location-based Points of Information (POIs) mark places described in the book. The idea is to show interesting things at these specific locations, relevant to the story, and in doing so extend and expand it. Much of the story was set in Northern France during WW2, and the AR pulled in photos and videos.
Here are some of the drawings used in the demo (images that were triggered by the AR):
This is an interactive story, which tries to work as a conversation and visual story combined. It deals with a controversial, topical and emotive local issue, concerning the development of Crystal Palace park (in London) after many years of neglect. Lots of plans and architectural designs were put forward, and local people consulted. But it occurred to me that one voice seriously missing from the consultation was that of the master architect and planner Joseph Paxton, who conceived the original design for the park. I thought it would be interesting to imagine his point of view.
I’ve tried here to make this online interactive story unfold as a conversation, which is often considered the highest form of interactivity. The work is inspired by the ‘Eliza’ project, an interactive pseudo-therapist/counsellor built many years ago that simulates a conversation (an idea since expanded upon by Alexa and others).
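A conversation of this kind can be driven by simple pattern matching, in the spirit of the original Eliza. This is only an illustrative sketch; the rules, replies, and the `respond` function here are invented for this example, not taken from the Paxton piece:

```javascript
// Eliza-style responder: match the visitor's words against simple patterns
// and reply in the voice of the character (here, an imagined Joseph Paxton).
const rules = [
  { pattern: /park|garden/i, reply: "A park must breathe. What would you keep open and green?" },
  { pattern: /develop|build/i, reply: "Build if you must, but never at the expense of the vista." },
  { pattern: /\byou\b/i, reply: "Never mind me. Tell me what the park means to you." },
];

function respond(input) {
  // first matching rule wins; ordering the rules sets their priority
  for (const rule of rules) {
    if (rule.pattern.test(input)) return rule.reply;
  }
  return "Go on. I am listening."; // default keeps the conversation moving
}
```

The craft is in writing rules that keep the exchange feeling like a point of view rather than a chatbot.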
My intentions are that the conversation presents an independent view of the situation, avoiding one side or the other, and local politics. The work tries to see above that. I hope the work is more questioning, acting on a deeper level.
In 2011, there was an ‘Occupy’ camp at St Paul’s in central London. There were lots of drawings and paintings sellotaped to the walls; the area became a sort of temporary public art gallery. Works full of slogans and messages, full of passion.
It occurred to me that many people wanted to express their views in this way and contribute their own artwork to show their support and solidarity, but they couldn’t physically be there at St Paul’s.
I built an online cartoon tool to make it easy to collaboratively author political/satirical cartoons. Once a week I printed them, went to St Paul’s and stuck them on the walls. Some well-known artists contributed their work, building up a big stock of fantastic ‘ready-made’ drawings and cartoons for everyone to remix into their own political cartoons.
The project was a collectively authored, networked satire, giving people a chance to participate, support, and speak out in a creative way.
I am in the process of updating the project code to the latest version of PHP, as it doesn’t work anymore. Once this is done I will post a link.
Buddy Rivers Live was a live networked performance of an automatic comedian. The first performance took place in a gallery in Bermondsey, London, in 2008. The project used live internet searches and feeds to create jokes automatically, on the fly. These text jokes were then converted into voice via a server-based text-to-speech synthesiser, and finally relayed to the audience. This automatic process meant that Buddy told jokes forever!
The core of this project is a computer-network-generated comedian, capable of an endless generated network performance. His performance is even affected by user interaction (heckling), and I built in some primitive AI. This was quite early in the days of text-to-speech, so the voice sounds quite robotic.
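The generation step can be sketched as phrases from live feeds slotted into joke templates. This is only a sketch of the idea: the original used live searches and server-side PHP, whereas here the “feed” is a hard-coded array and the templates are invented:

```javascript
// Sketch of Buddy's joke pipeline: phrases harvested from feeds (faked here)
// are slotted into joke templates, so new material can be generated endlessly.
const feedPhrases = ["the housing market", "my broadband provider", "the weather in Bermondsey"];

const templates = [
  (x) => `I wouldn't say ${x} is bad, but even my agent won't return its calls.`,
  (x) => `They asked me what I thought of ${x}. I said: I've seen warmer receptions at a tax audit.`,
];

function tellJoke() {
  const phrase = feedPhrases[Math.floor(Math.random() * feedPhrases.length)];
  const template = templates[Math.floor(Math.random() * templates.length)];
  // in the original, this string was sent to a text-to-speech synthesiser
  return template(phrase);
}
```

Because the feed phrases change with the news, the same templates keep producing fresh (and unpredictable) jokes.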
I am in the process of updating the project code to the latest version of PHP, as it doesn’t work anymore. Once this is done I will post a link.
I wanted to construct a character who could say unpredictable things and upset people. Adding sound to this project took my work into new areas. It was like bringing my creation to life. I set up a text-to-speech synthesiser on an Internet server, so each constructed joke was transformed into speech.
I wrote a backstory for this project, and published some interviews in the pre-show material, to try to drum up interest beforehand. The idea was that Buddy used to be a famous comedian, and hung around with many big stars. He used to perform often at the Leroy Club, but eventually his popularity waned and he retired to Marbella. This was to be a one-off show, to help save the club.
The ‘Tense Nervous Headaches’ project involved exploratory walks around different locations in London, mapping local electromagnetic radiation.
I hosted two exploratory walks around Crystal Palace, measuring radiation levels. Following detailed research into planning applications for the local area, the story of each mobile mast was investigated, along with the technical data for each type of transmitter installed, and the known medical effects. Participants measured radiation and drew on maps, contributing to a collaborative artwork – which was effectively a map of local mobile phone radiation.
The first exploratory walks were held in 2007, at Crystal Palace, followed by walks in 2013, for an exhibition at the Furtherfield Gallery, Finsbury Park, London. The walks proved to be very popular, and I have received many requests to do similar projects in other places.
Newscomic (2008) recycles the news, re-mixes it, subverts and distorts it. It takes live news feeds, chops them up, reworks them and places the text into speech bubbles in a comic. The result is a disjointed reading experience, where the words and pictures don’t quite match but create their own meaning from the network.
Newscomic has been described as a generative satire, that takes RSS feeds from major newspapers, uses PHP and databases to chop them up and generate interesting strings, based on the feed content and user input, then puts the resultant text into speech bubbles in a 3-panel comic format.
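The remix step can be sketched as follows. This is an illustrative reconstruction, not the original PHP: the chopping and recombination logic here (including the seeded pseudo-shuffle) is invented to show the general idea:

```javascript
// Sketch of the Newscomic remix step: chop feed headlines into word
// fragments, then recombine them into three speech bubbles, one per panel.
function chopAndRemix(headlines, seed) {
  // split every headline into word fragments
  const fragments = headlines.flatMap((h) => h.split(" "));
  // deterministic pseudo-shuffle: the same seed always yields the same comic
  const picked = [];
  for (let i = 0; i < 9; i++) {
    seed = (seed * 1103515245 + 12345) % 2147483648;
    picked.push(fragments[seed % fragments.length]);
  }
  // three bubbles of three fragments each, for the 3-panel format
  return [0, 1, 2].map((p) => picked.slice(p * 3, p * 3 + 3).join(" "));
}
```

In the original, user input and the feed content together steered which fragments were picked, so each reader’s comic came out differently.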
The result is a disjointed comic, where the words and pictures don’t quite fit but instead make their own story, blurring fact and fiction. Sometimes the story makes sense, and readers always found it fun to experiment with the stories. This early experimental idea arguably anticipates what is now referred to as “conditional text” in recent interactive story works (Ambient Literature).
Often the stories generated are quite surreal, and can even be revealing.
This is an interactive work on the subject of ‘wreckers’. In 2007 a container ship ran aground off the coast of Devon, England, and locals looted it. My idea was to make ‘debate drawings’: networked drawings generated by a debate on a specific topic. I wrote a script and database to combine web feeds and comments and convert them into shapes and lines.
This is what I call ‘Feed Art’ – mixing & mashing networked data into pictures to create an informed image – a ‘conflict picture’ or ‘debate drawing’. There are two sets of comments being pulled into the picture: a news feed on the subject of the wreckers, and the user comments from this site. They work to support or conflict with each other – there’s a debate going on within the picture.
This mix makes the work more connected to the subject, and gives rise to chance forms and connections to make interesting images & unintended meanings. The feed and user entries are converted into shapes, so all forms in the picture reflect the data itself.
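The conversion of text into form can be sketched like this. The mapping below (comment length to size, a text hash to position) is invented for illustration; the original was PHP and a database, and its exact mapping may have differed:

```javascript
// Sketch of the 'debate drawing' idea: each comment becomes a shape whose
// size and position are derived from the text itself.
function commentToShape(comment, source) {
  const size = Math.min(comment.length, 100); // longer comment, bigger shape
  // a crude hash of the text picks the horizontal position on the canvas
  let hash = 0;
  for (const ch of comment) hash = (hash * 31 + ch.charCodeAt(0)) % 1000;
  return {
    kind: source === "news" ? "line" : "blob", // news feed vs user comment
    x: hash, // 0-999 canvas coordinate
    size,
  };
}
```

Because every visible form is computed from the data, the finished picture really is the debate, not just a decoration of it.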
Please note: sadly ‘The Wreckers’ isn’t working properly right now, as PHP has been updated and I need to rework the code.
Shirley Bassey Mixed Up (2006) is a generative art work – more accurately an experimental illustrated networked generative biography, covering the ups and downs of her life, successes and tragedies. The illustrations are network generated, built dynamically from Yahoo searches, and combined dynamically with my own drawings. Through specifying different searches and playing with the customisation options, readers can create a unique illustration for each page of the story, resulting in their own personalised version of the biography.
Pulling in data from the Internet and manipulating/transforming it within a story, this work can arguably be described as a networked narrative.
This is an interactive story dealing with the tragedy of the Chinese migrant workers who died in Morecambe Bay, Lancashire, in 2004. The interactive approach offers a non-linear way of exploring the issues; each person has a different experience and effectively creates their own story.
There are many separate and interconnected strands to this story: the experience of the workers, the bosses exploiting cheap work, the reactions of local people, the families in China, the media/political reactions and economics.
These strands are parallel narratives running through the story. Each one is a small view of the bigger picture, giving a different perspective. Each page of the story is an illustration.