"System 319" at the Venice Biennale.
Marko Peljhan’s work revolves around two fundamental aspects of the world today: the technological developments in communication, transport, and surveillance; and the highly complex systems of political, economic, and military power driving such developments and employing them in administration, control, production or military applications. The potentials of technology are introduced into art as a way of confronting the systems of governance and their strategies. Peljhan’s art has thus evolved into a process involving a cartography of "signal territories," an analysis of the role of technology in society, particularly as it relates to power structures, a reflection on the possibilities of a different, creative and resistant use of technological means, and, ultimately, the creation of socially useful models of resistant behaviors in the contemporary social system. The theatrical dimension of Peljhan’s art plays a crucial role in this; his best-known project Makrolab can in this sense be interpreted as a technological laboratory and a social stage based on the concept of micro-performance.
At the Venice Biennale, Peljhan will present a work from his Resolution series. The series, which has evolved over some 20 years, proposes specific, material, and applicable solutions to certain problems in society. It is the artist's response to the state in which the world finds itself today, calling for a rediscovery of space and a utopian response to the rapid changes in the environment. In this sense, the autonomous vessel produced as part of the "Here we go again… SYSTEM 317" project is a colonizing, apocalyptic and pirating tool of sorts. In it, Peljhan brings together his vision, the potential for, and the impossibility of, a final exit from our rapidly deteriorating planetary conditions, in a process he calls "reverse conversion." He first employed this methodology in his "TRUST-SYSTEM" series, which focused on the conversion of cruise missile technology and, later, of unmanned systems for civil counter-reconnaissance. The artist proposes the construction of a counter-privateering machine intended for the days when the world's great empires find themselves, once again, in confrontation, one characterized by a grave lack of responsibility together with great destructive potential.
The X-43A Hypersonic Experimental (Hyper-X) Vehicle in the Benefield Anechoic Facility at Edwards Air Force Base, January 2000. Photo: Tom Tschida. Image courtesy of NASA.
Morphology Synthesis: Exploring sensory wearable design computation
As our generation encounters unprecedented ecological challenges, responses from all disciplines are urgently needed. Using cutting-edge tools, such as soft robotic fabrication and sensory technologies, researchers can find new methods to face these complex problems. A morphological approach can define new architectural typologies and orchestrate movement through music and sound sensors. In this course, we will explore the concept of morphology in architecture and study the evolution of forms within nature and built environments. You will learn novel concepts, such as pneumatic architecture, inflatable materials, and deployable structures. Through hands-on experiments, you will understand how to fabricate a wearable device with soft materials and how to compute a biomimetic design with digital modeling tools. By the end of the course, you will design wearable technology for the human body and integrate multimedia devices with sensors.
(Keywords: multimedia design, biomimetic engineering, morphogenetic computation, pneumatic architecture, human-machine interaction)
LAVIN is a conceptual response to Ground Truth in the modern AI age. From a neural network (NN) trained to recognize thousands of objects to a NN that can only generate binary outputs, each NN, like a human being, has its own understanding of the real world, even when the inputs are the same. LAVIN provides an immersive, responsive experience for visually exploring one such understanding, in which the real world is mapped to 50 daily objects. LAVIN constantly analyzes the real world via a camera and outputs semantic interpretations, which navigate the audience through a virtual world consisting of fluid abstract structures designed and animated based on the photogrammetry of the daily objects the NN can recognize.
Weidi and Jieliang will also speak at the event.
"airMorphologies" is a voice control pneumatic wearable device for people living in an air-polluted environment. The goal is to rethink the future of human body shape, body communication, and the social space around us.
Humanity's digital footprints in the vast data universe are duplicable, transferable, and mutable. Deletion has become much harder than throwing a piece of paper into a shredder, a machine first invented over a hundred years ago. Photos, videos, geographical tags, or simple texts living on social media platforms as the virtual presence of digitized human memories strengthen the power of machine computation and analysis while undermining our control over them. When we try to preserve or delete our own stories in the digital landscape, do we still have authorship of them? Are they in a constant shift of meaning and representation?
"Repository" is a virtual reality experience created around the issue and question of data authorship and data oblivion. It builds a world of data in motion merging the structure of a server farm (A place physically store data) with a paper shredder (A machine deconstruct data). Repository gradually transforms from a surreal bank safely stores memories into a space filled with floating shreds of letters and characters through assembling and fragmenting varies conversations borrowed from social media. Its non-linear narrativity, interactive experimental sound and surreal aesthetic provide a conceptualization of an alternative model of human-machine interaction, and questions whether we have the right to be forgotten, at the same time as the right to be remembered?
The C X U Gallery is at 4950 Wilshire Blvd, Los Angeles, CA 90010.
Brand Logo Sonification is an audiovisual installation that exists on the border between old, analog technology and new digital practices. It represents the global top-10 brand rankings and logos from 2000 to 2018 on an oscilloscope, an analog electronic instrument, through the digital computational processes of data visualization and sonification. Based on OpenCV image processing and a vector synthesis technique, it extracts the contours of logos and converts them to audio that can be rendered on a vector display, such as an oscilloscope or laser. While the contours chiefly determine sound textures, the ranking data shapes the overall composition. Through this process combining analog and digital practices, the work also reveals that our society is increasingly shaped by digital and IT companies, such as Google, Apple, Amazon, or Facebook, rather than by brands in traditional industries, as those companies rise in the rankings over time.
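As a rough illustration of the contour-to-audio idea described above (not the installation's actual pipeline), the sketch below extracts a logo's contours with OpenCV and writes them as a two-channel signal for an oscilloscope in X/Y mode; the filename, threshold, and repetition count are illustrative assumptions.

```python
import cv2
import numpy as np
import soundfile as sf

# Load a logo, threshold it, and extract its contours (OpenCV image processing).
img = cv2.imread("logo.png", cv2.IMREAD_GRAYSCALE)            # illustrative filename
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)

# Concatenate the contour points and normalize them to the range [-1, 1].
pts = np.vstack([c.reshape(-1, 2) for c in contours]).astype(np.float64)
pts -= pts.mean(axis=0)
pts /= np.abs(pts).max()

# Repeat the shape so it persists on screen; X drives the left channel and
# inverted Y drives the right channel of an oscilloscope in X/Y mode.
loop = np.tile(pts, (200, 1))
stereo = np.column_stack([loop[:, 0], -loop[:, 1]])
sf.write("logo_scope.wav", stereo, 48000)
```

In a vector-synthesis setup like this, the traced shape is heard as texture while its repetition rate and amplitude can be modulated by external data, which is roughly the role the ranking data plays in the composition described above.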
Borrowed Scenery is a virtual reality experience that constructs an autobiographical spatial narrative that points to the deconstruction and reconstruction of cultural identity through experimental visualization of image data.
Being exposed to diverse cultures enables us to continuously portray our own cultural identities by collaging our collective memories across cultural boundaries. In this project, I utilized autobiographical threads to evoke the universal experience of alienation and displacement. Photographs of eastern and western motifs, symbols, and landscapes, collected as raw source image data, are captured volumetrically, mostly in two places: my hometown of Suzhou (China) and my current home, Los Angeles (US). Visualizing these two groups of image data in the VR world changes the way we perceive the scalability of intersections, as we can navigate the space to play with different points of view. A non-linear narrative is created as the intelligent agents (generated from the image data) cross over dynamically in the virtual world.
The visualization methodology includes photogrammetry, shader programming, and intelligent system development. The pixel coordinates from these sets of image data are reconstructed as 3D coordinates of points on structures through automatic calibration. A customized shader is designed for the textures of these data-driven structures; it displaces and animates processed image pixels on vertices with layers of customized algorithms and 3D Voronoi tessellation, collectively generating fragmented geometric forms and fluid chaos. These forms are programmed as intelligent agents that seek and wander in the environment, collide with others, die, and get reborn. The system simulates how our cultural identities evolve and how the 'border' is dynamically disturbed and reformed, in both aesthetics and subject matter.
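A minimal sketch of the seek-and-wander agent behavior described above, assuming simple velocity-steering agents in a bounded volume; this is illustrative only, not the project's actual agent system, and every name and parameter here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """One data-driven form that wanders, seeks a target, collides with bounds, and is reborn."""
    def __init__(self, bounds=50.0):
        self.bounds = bounds
        self.pos = rng.uniform(-bounds, bounds, 3)
        self.vel = rng.normal(0.0, 0.1, 3)
        self.life = rng.uniform(200, 600)          # frames until "death" and rebirth

    def step(self, target, max_speed=0.5):
        wander = rng.normal(0.0, 0.05, 3)           # random drift
        seek = (target - self.pos) * 0.002          # gentle pull toward a shared target
        self.vel = np.clip(self.vel + wander + seek, -max_speed, max_speed)
        self.pos = np.clip(self.pos + self.vel, -self.bounds, self.bounds)
        self.life -= 1
        if self.life <= 0:                          # die and get reborn elsewhere
            self.__init__(self.bounds)

agents = [Agent() for _ in range(20)]
target = np.zeros(3)
for frame in range(1000):
    for a in agents:
        a.step(target)
```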
Abstract
This paper introduces ARLooper, an AR-based iOS application for multi-user sound recording and performance that aims to explore the possibility of actively using mobile AR technology to create novel musical interfaces and collaborative audiovisual experiences. ARLooper allows the user to record sound through the microphones of mobile devices and, at the same time, visualizes and places the recorded sounds as 3D waveforms in an AR space. The user can play, modify, and loop the recorded sounds with several audio filters attached to each sound. Since ARLooper generates the world map information through the ARKit tracking technique called visual-inertial odometry, which tracks the real world and maintains a correspondence between real and AR spaces, it enables multiple users to connect to the same AR space by sharing and synchronizing the world map data. In this shared AR space, users can see each other's 3D waveforms and activities, such as selecting and manipulating them, opening up the potential for collaborative AR performance.
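ARLooper itself is an iOS/ARKit application; purely as a language-agnostic illustration of the loop-and-filter idea in the abstract, the sketch below applies a one-pole low-pass filter to a recorded buffer and tiles it for looped playback. The function names, parameters, and test signal are hypothetical and not taken from ARLooper.

```python
import numpy as np

def one_pole_lowpass(x, alpha=0.05):
    """Simple one-pole low-pass filter, standing in for a per-sound audio filter."""
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def loop(recording, n_repeats=4, alpha=0.05):
    """Filter a recorded buffer and tile it end-to-end, as a stand-in for looped playback."""
    return np.tile(one_pole_lowpass(recording, alpha), n_repeats)

# Illustrative "recording": one second of a 220 Hz tone at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
looped = loop(np.sin(2 * np.pi * 220 * t), n_repeats=4)
```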
The work we exhibited was based on our paper "Spatiotemporal Haptic Effects from a Single Actuator via Spectral Control of Cutaneous Wave Propagation", authored by B. Dandu, Y. Shao, A. Stanley, and Y. Visell. This year’s World Haptics Conference was the largest yet, with over 700 attendees.
"Cacophonic Choir" is an interactive sound installation composed of nine individual voices. Altogether, from a distance, they form an unintelligible choir. Within this choir, each voice has a unique story to tell. These narratives are not static, however; they transform as a visitor approaches. Fragmented and distorted at first, the voices respond to the visitor’s bodily presence, and their narratives become clearer and more coherent as one gets closer. The full narrative is revealed only when one is in very close proximity to the given voice. These recitations are based on the anonymous accounts of more than five hundred sexual assault survivors that were shared on The When You're Ready Project, a website where survivors of sexual violence can share their stories and have their voices heard. #MeToo.
Reincarnation is a virtual reality art experience based on French surrealist painter Yves Tanguy's paintings, combined with the creation of a supernatural ecosystem. It aims to amplify the experience of the original artworks by adopting an agent-based spatial narrative and a crossmodal surrealist aesthetic paradigm for visuals, audio, motion, and interaction.
https://s2019.siggraph.org/presentation/?id=var_221&sess=sess342
Making Visible the Invisible is a six-screen, dynamic data visualization artwork at the Seattle Public Library. It visualizes patrons' library checkouts, received by the hour, through four different animations to give a sense of community interests. The artwork was activated in September 2005 for a 10-year run and has since been extended.
https://s2019.siggraph.org/presentation/?sess=sess183&id=artps_139
This VR project is a conceptual response to "Ground Truth" in the modern AI age. From a neural network (NN) trained to recognize thousands of objects to a NN that can only generate binary outputs, each NN, like a human being, has its own understanding of the real world, even when the inputs are the same. LAVIN provides an immersive, responsive experience that allows you to visually explore one such understanding, in which the real world is mapped to fewer than a hundred daily objects. LAVIN constantly analyzes the real world via a camera and outputs semantic interpretations that navigate the audience through a virtual world consisting of fluid abstract structures designed and animated based on the photogrammetry of the daily objects the NN can recognize.
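A minimal sketch of the classify-and-map step described above, using an ImageNet-pretrained torchvision model as a stand-in for the project's own network; the class-to-structure mapping, the chosen class indices, and the structure names are all hypothetical.

```python
import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

# Stand-in classifier (the installation's own NN is not public): ImageNet-pretrained MobileNetV2.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical mapping from class indices to abstract structures in the virtual world.
SCENE_FOR_CLASS = {504: "structure_a", 620: "structure_b", 898: "structure_c"}

def interpret(frame: Image.Image) -> str:
    """Return the virtual structure keyed by the NN's top prediction for one camera frame."""
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    top = int(logits.argmax(dim=1))
    return SCENE_FOR_CLASS.get(top, "unrecognized_structure")
```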
"BeHave" by Sihwa Park
Abstract
This paper presents BeHAVE, a web-based audiovisual piece that explores a way to reveal the author's mobile phone use behavior through multimodal data representation, considering the concept of indexicality in data visualization and sonification. It visualizes the spatiality and overall trend of mobile phone use data as a geographical heatmap and a heatmap chart. On top of that, BeHAVE presents a mode for temporal data exploration that makes a year of data perceivable in a short period and represents the temporality of the data. Based on a microsound synthesis technique, it also sonifies the data to evoke visual and auditory perception simultaneously in this mode. As a form of indexical visualization, BeHAVE also suggests an approach that represents data through mobile phones simultaneously by using WebSocket. Ultimately, BeHAVE attempts not only to improve the perception of self-tracking data but also to arouse aesthetic enjoyment through a multimodal data portrait as a means of self-representation.
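As a hedged illustration of microsound-style sonification of hourly phone-use data (not BeHAVE's actual synthesis), the sketch below maps each hourly value to the density and pitch of short Hann-windowed grains; the data, the mappings, and the file name are all illustrative.

```python
import numpy as np
import soundfile as sf

SR = 44100

def grain(freq, dur=0.03, sr=SR):
    """A single Hann-windowed sine grain, the microsound building block."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)

def sonify(usage_minutes, seconds_per_sample=0.1, sr=SR):
    """Map each hourly usage value to grain density and pitch: heavier use -> more, brighter grains."""
    out = np.zeros(int(len(usage_minutes) * seconds_per_sample * sr))
    rng = np.random.default_rng(1)
    for i, minutes in enumerate(usage_minutes):
        n_grains = int(minutes / 5) + 1                    # illustrative density mapping
        base = int(i * seconds_per_sample * sr)
        for _ in range(n_grains):
            g = grain(300 + 20 * minutes)                  # illustrative pitch mapping
            start = base + rng.integers(0, int(seconds_per_sample * sr))
            end = min(start + g.size, out.size)
            out[start:end] += g[: end - start]
    return out / max(1e-9, np.abs(out).max())

# Illustrative data: minutes of phone use for each hour of one day.
audio = sonify([2, 0, 0, 0, 1, 5, 12, 30, 22, 18, 25, 40, 35, 28,
                20, 33, 45, 50, 38, 26, 15, 10, 6, 3])
sf.write("behave_sonification.wav", audio, SR)
```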
"Etherial" will bring the quantum form into the material, through virtual reality, spatial augmented reality and material form. The work will consist of two windows into the virtual that will ultimately control the various visual/sonic quantum forms, a SAR window in a completely immersive VR space that will allow one to sculpt quantum mechanics in real time, and a physically rendered sculpture that will be tracked with gestural sensors so one can perform the work from the sculpture as well. Two controllers into a completely immersive VR space that will allow performers to sculpt quantum mechanics in real time in total synchrony with one another and the virtual environment that they control.
In keeping with the theme of "LUX", the quantum, revealed, the hydrogen-like atom combinations feature light-emitting wave function combinations that move toward the science of the phenomenon, while the quantum, suggests the ethereal nature of spirit in the form of light, EHERIAL/IMMUTABLE – to touch the untouchable.
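For reference, the hydrogen-like wave functions invoked here take the standard separable textbook form; how the piece actually renders their superpositions as light and sound is the artists' own mapping and is not specified here:

\[
\psi_{n\ell m}(r,\theta,\varphi) = R_{n\ell}(r)\, Y_{\ell}^{m}(\theta,\varphi),
\qquad
\Psi = \sum_{n,\ell,m} c_{n\ell m}\, \psi_{n\ell m},
\]

with what is seen as light presumably related to the probability density \(|\Psi|^2\) of the chosen superposition.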
B. Dandu, Y. Shao, A. Stanley, Y. Visell, Spatiotemporal Haptic Effects from a Single Actuator via Spectral Control of Cutaneous Wave Propagation. Proc. IEEE World Haptics Conference, To Appear, 2019. Best Paper Award Nomination.
G. Reardon, Y. Shao, B. Dandu, W. Frier, B. Long, O. Georgiou, Y. Visell, Cutaneous Wave Propagation Shapes Tactile Motion: Evidence from Air-Coupled Ultrasound. Proc. IEEE World Haptics Conference, To Appear, 2019.
A. Kawazoe, M. Di Luca, Y. Visell, Tactile Echoes: A Wearable System for Tactile Augmentation of Objects. Proc. IEEE World Haptics Conference, To Appear, 2019.
T. Hachisu, Y. Shao, K. Suzuki, Y. Visell, Empirical Study on Transfer Functions from Wrists to Hands. IEEE World Haptics Conference, Work-in-Progress, To Appear, 2019.
"Touching Affectivity" is an interactive sculpture whose vocalizations are sonifications of the way it is touched. The creature experiences its world through pressure sensors and handmade conductive fur, which can detect different types of touch. Exhibit guests can interact with the creature while listening to the creature’s response. Aspects of the conductive fur signal affect the speed, volume, filters, and the timbre of the synthesized sound. The parameters chosen for the sound generation algorithm are grounded in prior research in emotive vocal communication and emotive music. This work explores how gesture can be used to produce sound and communicate emotion.
The course, titled "In the Digital Age - Experiencing Architecture and Music Through STEM", is an introduction to Media Arts and Technology through the lens of architecture and music, and adds humanities (H) and arts (A) to the STEM model, to produce the THEMAS model. The SERA program introduces qualified high school students to the research enterprise through project-based, directed research in STEM related fields, including machine learning, marine biology, evolutionary biology, global conflicts, and media arts & technology.
The course challenges what you think architecture and music are by examining how the intersection of these topics has evolved over time through the lens of human experience and the digital age; for example, the way theme parks are intentionally designed, or the role a musical score plays in a film to enhance or manipulate the audience's experience. You will learn the basic concepts of digital architecture and computer music through exercises using physical and digital modeling, 3D fabrication, haptics (touch/sound), and interactive design, highlighting how new media technologies and fabrication tools have allowed for the integration of STEM and the fine arts. Students will attend a field recording workshop and develop a hands-on studio project to learn creative techniques in music composition and sound making. In addition, students will develop oral communication and formal presentation skills through a series of workshop project presentations. By the end of the course, you will develop the methodologies for an interdisciplinary research project. This is an excellent opportunity for participants interested in both science and art to increase their skills and knowledge in preparation for their college education.
The photographs focused on the intersection of noise and signal in the news from the mid-1980s. The images were acquired into the SBMA collection in 2017.
The "Authority of the News" series, Fuji Inkjet, 1986.
Reincarnation is a virtual reality art experience based on French surrealist painter Yves Tanguy's paintings, in combination with my creation of pseudo-natural beings. Reincarnation intends to amplify the experience of the original artworks by creating an agent-based spatial narrative and a surreal aesthetic for visuals, audio, motion, and interaction. Reincarnation is also an artistic search for animism in various kinds of matter, and it challenges the anthropocentric worldview in the artificial intelligence era. By providing a multi-perspective experience, it calls for people's empathy for human beings as well as other organic creations, artifacts, places, and abstract entities.
currentsnewmedia.org/work/reincarnation-virtual-reality-recreation-of-yves-tanguys-world
"Come Hither to Me!" is an interactive robotic performance piece, which examines the emotive social interaction between an audience and a robot. Our interactive robot attempts to communicate and flirt with audience members in the gallery. The robot uses feedback from sensors, auditory data, and computer vision techniques to learn about the participants and inform its conversation. The female robot approaches the audience, picks her favorites, and starts charming them with seductive comments, funny remarks, backhanded compliments, and personal questions. We are interested in evoking emotions in the participating audience through their interactions with the robot. This artwork strives to invert gender roles and stereotypical expectations in flirtatious interactions. The performative piece explores the dynamics of social communication, objectification of women, and the gamification of seduction. The robot reduces flirtation to an algorithm, codifying pick-up lines and sexting paradigms.