The title of her presentation is "Using the Creative Process as a Computational Framework for Unfolding Complex Systems". In Professor Kuchera-Morin's research, one picture is worth approximately 60 million numbers. How can one find patterns in complex information and work with that information creatively and intuitively, leading to new and unique innovation? Using the compositional framework within the AlloSphere, one of the largest display devices in the world for multimodal data representation and an ideal platform for designing an n-dimensional sketching system, her group has developed a series of prototypes and solutions for immersive multimodal mappings of complex scientific data.
At this year's event, presentations were given by MAT professor JoAnn Kuchera-Morin, Director of the AlloSphere Research Group at the University of California, Santa Barbara, and MAT alumna Yoon Chung Han, an assistant professor in the Department of Design at San Jose State University.
Professor Kuchera-Morin's presentation was titled "Composing and Performing Complex Systems: From the Quantum to the Cosmological".
Professor Han's presentation was titled "The Roads on Your Veins: Revealing Hidden Narratives in Human Veins and Visualizing Veins and Map Data Using Technology".
The 109th College Art Association's annual conference was held from February 10-13, 2021.
The event can be viewed here: www.youtube.com/watch?v=3PJ0UNUGiYo
"Uncertain Facing" is a data-driven, interactive audiovisual installation that represents the uncertainty of data points whose positions in 3D space are estimated by machine learning techniques. It also raises concerns about the possible unintended use of machine learning with synthetic or fake data.
Located in La Cumbre Plaza at 120 South Hope Ave Suite F119, the museum creates a hands-free interactive experience that explores the next generation of media arts. The art pieces are primarily by local artists, including one by MAT students Xindi Kang and Rodney Duplessis titled "Oscilla", in which a person speaks into a microphone and watches the frequencies of their voice displayed in multiple colors on a large screen.
Read more about MSME in this Santa Barbara Noozhawk article.
In the article, professor Legrady discusses his art, his graduate course on data visualization, artificial intelligence, big data, and the impact of the pandemic on viewing art.
The lab also received honorable mentions for two papers on interpersonal touch by authors Hachisu, Reardon, Shao, and Suzuki and Dinulescu, Reardon, and Topp.
The William and Meredith Saunderson Prizes for Emerging Artists consist of three awards of five thousand dollars each, supporting young emerging visual artists whose practice shows potential and who are deemed to have the determination and talent to contribute to the legacy of art in Canada.
MYRIOI ("myriad particles") is the third immersive media work in a series of quantum media compositions under the direction of Dr. JoAnn Kuchera-Morin, Director of the AlloSphere Research Group. It offers a shared experience that allows interaction with the world of the quantum: waveforms and light, the pure essence of form and shape. MYRIOI immerses its audience in myriads of particles that create currents and become waveforms, allowing them to understand and viscerally experience the quantum while sharing and interacting with the narrative. MYRIOI will also be featured in Leonardo, the leading international journal covering the application of contemporary science and technology to the arts, published by the MIT Press.
The SIGGRAPH Conference is the world’s largest and most influential conference on the theory and practice of computer graphics and interactive techniques, inspiring progress through education, excellence, and interaction.
For more information about the AlloSphere, visit www.allosphere.ucsb.edu
"Cacophonic Choir", an Art Papers and Art Gallery selection, is an interactive sound installation aimed at bringing attention to the first-hand stories of sexual assault survivors. This is achieved through rethinking the relationship between the narrator and the listener; in this case, the survivor and the public, as well as the survivor’s own account of their experience and its public reflection and distortion. In realizing this work, we employed several digital media techniques, including machine learning, physical computing, digital audio signal processing, and digital design and fabrication.
The award is for an upcoming publication titled "James Bay Cree Culture & Architecture", a monograph of documentary photographs created in four coastal Cree First Nation villages in sub-arctic James Bay in 1973. The publication will consist of introductory texts and approximately 180 black-and-white photographs of everyday scenes in the Cree communities, taken just prior to their legal negotiations over infrastructure autonomy and land rights in response to the construction of the James Bay hydroelectric project on traditional hunting lands.
Photos: George Legrady. James Bay Cree, Fort George, James Bay, 1973, Quebec, Canada.
The first project, "Volume of Voids", is inspired by the current Covid-19 pandemic, and is a set of 3D printed artifacts that explore the question "When people maintain a distance from objects and other people, what is the volume of voids between them?"
Volume of Voids, 2020
The second project, "Cangjie", is "an immersive exploration in semantic human-machine reality generated by an intelligent system in real-time through perceiving the real-world via a camera [located in the exhibition space]".
For more information about the IEEE VIS Arts Program 2020, visit visap.net.
Humans and machines are in constant conversation. Humans start the dialogue by using programming languages that are compiled to binary digits that machines can interpret. However, intelligent machines today are not only observers of the world; they also make their own decisions. If A.I. imitates human beings by creating a symbolic system to communicate, based on its own understanding of the universe, and starts to actively interact with us, how will this recontextualize and redefine our coexistence in this intertwined reality?
This VR project provides an immersive exploration of a semantic human-machine reality generated by an intelligent system in real time through perceiving the real world via a camera [located in the exhibition space]. Inspired by Cangjie, a legendary ancient Chinese historian (c. 2650 BCE) who invented Chinese characters based on the characteristics of everything on earth, we trained a neural network, which we call Cangjie, to learn the construction and principles of all Chinese characters. It perceives its surroundings and transforms them into a collage of unique symbols made of Chinese strokes. The symbols produced through the lens of Cangjie, entangled with the imagery captured by the camera, are visualized algorithmically as abstract pixelated semiotics, continuously evolving and compositing an ever-changing poetic virtual reality. Cangjie is not only a conceptual response to the tension and fragility in the coexistence of humans and machines but also an artistic imagination of our future language in this era of artificial intelligence.
Disciplines: Biomimicry, Pneumatic Architecture, Media Arts & Technology, Human-Computer Interaction.
Conventional wearable robots designed with rigid materials, such as metal and hard plastic, are often limited by their low flexibility, functionality, and biological compatibility. With sensory technology and novel materials, can we rethink the wearable device as a soft and organic interface? Sensing the world means connecting the body (or mechanics), the brain (or controller), and the environment. In this course, we will focus on the emerging field of soft robotics, bringing together research and applications of wearable technology. We will introduce the concept of computational morphology in soft robotics and study its design principles using 3D modeling tools. Specific topics include body architecture, pneumatic architecture, soft mechanisms, smart materials, biomimicry design, geometrical morphology, sensory technology, embodied intelligence, wearable computing, and human-robot interaction. We will also discuss applications of soft wearables in art, communication, fitness, entertainment, medicine, sports, and beyond. Through a series of hands-on activities, students will explore digital fabrication, soft motion mechanisms, soft actuation, and wearable sensors. By the end of the course, students will design, model, and build a wearable device, and analyze its human-robot interaction.
Director Dr. JoAnn Kuchera-Morin, chief designer of the three-story facility on the UC Santa Barbara campus, says the intersection of science, technology, engineering, arts, and mathematics has facilitated exciting new avenues for scientific discovery.
"But it is their strong desire to welcome research partners and collaborations of all kinds that leads the AlloSphere to make a real difference in the local community."
Goleta’s Finest is a 70-year-old tradition honoring remarkable individuals whose contributions have enhanced the Goleta community.
The 2019 award recipients will be honored with a formal celebration on Nov. 23 from 6 to 9:30 p.m. at the beautiful Ritz-Carlton Bacara.
The event is accessible to everyone with no registration required.
The live event will feature video presentations of the best papers from the conference, an awards ceremony, and previews of upcoming haptics conferences from around the world.
The event will be archived here:
For more information and video presentations of all of the 77 technical papers, visit:
transmediale 2020 festival in Berlin, Germany
January 28 - March 1
Information about the "Adversarial Hacking" workshop can be found here:
Fabian will also speak at the transmediale 2020 symposium on Neural Network Cultures on February 1st 2020 at the Volksbühne in Berlin:
"System 317" at the Venice Biennale.
Marko Peljhan’s work revolves around two fundamental aspects of the world today: the technological developments in communication, transport, and surveillance; and the highly complex systems of political, economic, and military power driving such developments and employing them in administration, control, production or military applications. The potentials of technology are introduced into art as a way of confronting the systems of governance and their strategies. Peljhan’s art has thus evolved into a process involving a cartography of "signal territories," an analysis of the role of technology in society, particularly as it relates to power structures, a reflection on the possibilities of a different, creative and resistant use of technological means, and, ultimately, the creation of socially useful models of resistant behaviors in the contemporary social system. The theatrical dimension of Peljhan’s art plays a crucial role in this; his best-known project Makrolab can in this sense be interpreted as a technological laboratory and a social stage based on the concept of micro-performance.
At the Venice Biennale, Peljhan will present a work from his Resolution series. This series, which has evolved over some 20 years, proposes some specific material and applicable solutions to certain problems in society. It is the artist’s response to the state in which the world finds itself today, calling for a rediscovery of space and a utopian response to the rapid changes in the environment. In this sense, the autonomous vessel produced as part of the "Here we go again… SYSTEM 317" project is a colonizing, apocalyptic and pirating tool of sorts. In it, Peljhan brings together his vision, the potential for and the impossibility of a final exit from our rapidly deteriorating planetary conditions in a process he calls “reverse conversion.” He first employed this methodology in his "TRUST-SYSTEM" series, which focused on the conversion of cruise missile technology and later, unmanned systems for civil counter-reconnaissance. The artist proposes the construction of a counter-privateering machine intended for the days when the world’s great empires find themselves, once again, in confrontation—and one characterized by a grave lack of responsibility together with great destructive potential.
The X-43A Hypersonic Experimental (Hyper-X) Vehicle in the Benefield Anechoic Facility at Edwards Air Force Base, January 2000. Photo: Tom Tschida. Image courtesy of NASA.
Morphology Synthesis: Exploring sensory wearable design computation
As our generation encounters unprecedented ecological challenges, responses from all disciplines are urgently needed. Using cutting-edge tools, such as soft robotic fabrication and sensory technologies, researchers can find new methods for facing these complex problems. A morphological approach can define new architectural typologies and orchestrate movement through music and sound sensors. In this course, we will explore the concept of morphology in architecture and study the evolution of forms within nature and built environments. You will learn novel concepts such as pneumatic architecture, inflatable materials, and deployable structures. Through hands-on experiments, you will understand how to fabricate a wearable device with soft materials and how to compute a biomimetic design with digital modeling tools. By the end of the course, you will design wearable technology for the human body and integrate multimedia devices with sensors.
(Keywords: multimedia design, biomimetic engineering, morphogenetic computation, pneumatic architecture, human-machine interaction)
LAVIN is a conceptual response to "Ground Truth" in the modern AI age. From a neural network (NN) trained to recognize thousands of objects to a NN that can only generate binary outputs, each NN, like a human being, has its own understanding of the real world, even when the inputs are the same. LAVIN provides an immersive responsive experience for visually exploring one NN's understanding, in which the real world is mapped to 50 daily objects. LAVIN constantly analyzes the real world via a camera and outputs semantic interpretations, which navigate the audience through a virtual world consisting of fluid abstract structures designed and animated based on the photogrammetry of the daily objects that the NN can recognize.
Weidi and Jieliang will also speak at the event.
"airMorphologies" is a voice-controlled pneumatic wearable device for people living in air-polluted environments. The goal is to rethink the future of human body shape, body communication, and the social space around us.
Humanity’s digital footprints in the vast data universe are duplicable, transferable, and mutable. Deletion has become much harder than throwing a piece of paper into a shredder, a machine first invented over a hundred years ago. Photos, videos, geographical tags, or simple texts living on social media platforms as the virtual presence of digitized human memories strengthen the power of machine computation and analysis while undermining our control over them. When we try to preserve or delete our own stories in the digital landscape, do we still have authorship over them? Are they in a constant shift of meaning and representation?
"Repository" is a virtual reality experience created around the questions of data authorship and data oblivion. It builds a world of data in motion, merging the structure of a server farm (a place that physically stores data) with that of a paper shredder (a machine that deconstructs data). By assembling and fragmenting various conversations borrowed from social media, Repository gradually transforms from a surreal bank that safely stores memories into a space filled with floating shreds of letters and characters. Its non-linear narrativity, interactive experimental sound, and surreal aesthetic conceptualize an alternative model of human-machine interaction, and question whether we have the right to be forgotten as well as the right to be remembered.
The C X U Gallery is at 4950 Wilshire Blvd, Los Angeles, CA 90010.
Brand Logo Sonification is an audiovisual installation that exists on the border between old, analog technology and new digital practices. Through the computational digital processes of data visualization and sonification, it represents the global top-10 brand ranking data and logos from 2000 to 2018 on an oscilloscope, an electronic analog instrument. Based on OpenCV image processing and a vector synthesis technique, it extracts the contours of logos and converts them to audio that can be rendered on a vector display, such as an oscilloscope or laser. While the contours chiefly determine sound textures, the ranking data contributes to the whole composition. Through this process combining analog and digital practices, the work also reveals that our society is increasingly shaped by digital and IT companies, such as Google, Apple, Amazon, or Facebook, rather than by brands in traditional industries, as the former rise in the rankings over time.
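The contour-to-audio step can be sketched as follows. This is a hypothetical reconstruction for illustration, not the installation's published code: it assumes contour point lists of the kind `cv2.findContours` returns, and traces them as a stereo signal for an oscilloscope in X-Y mode.

```python
import numpy as np

def contours_to_oscilloscope_audio(contours, sample_rate=44100, duration=1.0):
    """Convert contour point lists (e.g. the (N, 2) arrays produced by
    cv2.findContours, reshaped) into a stereo signal whose left/right
    channels trace the shape on an oscilloscope in X-Y mode."""
    # Concatenate all contours into a single drawing path
    path = np.vstack([np.asarray(c, dtype=float).reshape(-1, 2) for c in contours])
    # Center and normalize coordinates to [-1, 1] (full-scale audio)
    path -= path.mean(axis=0)
    path /= np.abs(path).max() + 1e-9
    # Resample the path so one full trace fills the requested duration
    n_samples = int(sample_rate * duration)
    idx = np.linspace(0, len(path) - 1, n_samples)
    t = np.arange(len(path))
    left = np.interp(idx, t, path[:, 0])    # X deflection
    right = np.interp(idx, t, -path[:, 1])  # Y deflection (image y grows downward)
    return np.stack([left, right], axis=1)
```

Played back as a stereo signal with the oscilloscope in X-Y mode, one channel drives horizontal deflection and the other vertical; looping the buffer redraws the logo continuously.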
Borrowed Scenery is a virtual reality experience that constructs an autobiographical spatial narrative that points to the deconstruction and reconstruction of cultural identity through experimental visualization of image data.
Being exposed to diverse cultures enables us to continuously portray our own cultural identities by collaging our collective memories across cultural boundaries. In this project, I utilized autobiographical threads to evoke the universal experience of alienation and displacement. Photographs of eastern and western motifs, symbols, and landscapes, collected as raw source image data, were captured volumetrically, mostly in two places: my hometown of Suzhou (China) and my current home, Los Angeles (US). Visualizing these two groups of image data in the VR world changes the way we perceive the scalability of their intersections, as we can navigate the space to play with different points of view. A non-linear narrative is created as the intelligent agents (generated from image data) cross over dynamically in the virtual world.
The visualization methodology includes photogrammetry, shader programming, and intelligent-system development. The pixel coordinates from these sets of image data are reconstructed, through automatic calibration, as the 3D coordinates of points on structures. A customized shader is designed for the textures of these data-driven structures; it displaces and animates processed image pixels on vertices with layers of customized algorithms and 3D Voronoi tessellation, collectively generating fragmental geometrical forms and fluid chaos. These forms are programmed as intelligent agents that seek and wander in the environment, collide with others, die, and are reborn. The system simulates, in both aesthetics and subject matter, how our cultural identities evolve and how the 'border' is dynamically disturbed and reformed.
This paper introduces ARLooper, an AR-based iOS application for multi-user sound recording and performance that explores the possibility of actively using mobile AR technology to create novel musical interfaces and collaborative audiovisual experiences. ARLooper allows the user to record sound through a mobile device's microphone and, at the same time, visualizes and places the recorded sounds as 3D waveforms in an AR space. The user can play, modify, and loop the recorded sounds with several audio filters attached to each sound. Since ARLooper generates world-map information through ARKit's tracking technique, visual-inertial odometry, which tracks the real world and the correspondence between real and AR spaces, it enables multiple users to connect to the same AR space by sharing and synchronizing the world-map data. In this shared AR space, users can see each other's 3D waveforms and activities, such as their selection and manipulation, opening the potential for collaborative AR performance.
The work we exhibited was based on our paper "Spatiotemporal Haptic Effects from a Single Actuator via Spectral Control of Cutaneous Wave Propagation", authored by B. Dandu, Y. Shao, A. Stanley, and Y. Visell. This year’s World Haptics Conference was the largest yet, with over 700 attendees.
"Cacophonic Choir" is an interactive sound installation composed of nine individual voices. Altogether, from a distance, they form an unintelligible choir. Within this choir, each voice has a unique story to tell. These narratives are not static, however; they transform as a visitor approaches. Fragmented and distorted at first, the voices respond to the visitor’s bodily presence, and their narratives become clearer and more coherent as one gets closer. The full narrative is revealed only when one is in very close proximity to the given voice. These recitations are based on the anonymous accounts of more than five hundred sexual assault survivors that were shared on The When You're Ready Project, a website where survivors of sexual violence can share their stories and have their voices heard. #MeToo.
Reincarnation is a virtual reality art experience, based on French surrealist painter Yves Tanguy’s paintings combined with the creation of a supernatural ecosystem. It aims to amplify the experience of original artworks by adopting an agent-based spatial narrative and a crossmodal surrealist aesthetic paradigm for visual, audio, motion, and interaction.
Making Visible the Invisible is a six-screen, dynamic data visualization artwork at the Seattle Public Library. It visualizes patrons’ library checkouts, received by the hour, through four different animations to give a sense of community interests. The artwork was activated in September 2005 for a 10-year operation, which has since been extended.
This VR project is a conceptual response to "Ground Truth" in the modern AI age. From a neural network (NN) that is trained to recognize thousands of objects to a NN that can only generate binary outputs, each NN, like a human being, has its own understanding of the real world, even when the inputs are the same. LAVIN provides an immersive responsive experience that allows you to visually explore one NN's understanding, in which the real world is mapped to fewer than a hundred daily objects. LAVIN constantly analyzes the real world via a camera and outputs semantic interpretations that the audience navigates, in a virtual world consisting of fluid abstract structures designed and animated based on the photogrammetry of the daily objects that the NN can recognize.
"BeHAVE" by Sihwa Park
This paper presents BeHAVE, a web-based audiovisual piece that explores a way to reveal the author’s mobile phone use behavior through multimodal data representation, considering the concept of indexicality in data visualization and sonification. It visualizes the spatiality and overall trend of mobile phone use data as a geographical heatmap and a heatmap chart. On top of that, BeHAVE presents a mode for temporal data exploration that makes a year of data perceivable in a short period and represents the temporality of the data. Based on a microsound synthesis technique, it also sonifies the data in this mode to simultaneously evoke visual and auditory perception. As a form of indexical visualization, BeHAVE further suggests an approach that represents data on mobile phones simultaneously by using WebSocket. Ultimately, BeHAVE attempts not only to improve the perception of self-tracking data but also to arouse aesthetic enjoyment through a multimodal data portrait as a means of self-representation.
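The microsound idea can be illustrated with a minimal granular-synthesis sketch: each hour's usage count becomes a burst of short windowed sine grains, so heavier use sounds denser. The mapping and every parameter here are assumptions for illustration, not BeHAVE's actual implementation.

```python
import numpy as np

def sonify_counts(hourly_counts, sample_rate=22050, slot_s=0.05, grain_ms=10, freq=880.0):
    """Granular ('microsound') sonification sketch: each count becomes that
    many short windowed sine grains scattered within its time slot, so
    larger values sound denser. All parameters are illustrative."""
    rng = np.random.default_rng(0)
    slot_len = int(sample_rate * slot_s)            # samples per data value
    grain_len = int(sample_rate * grain_ms / 1000)  # samples per grain
    t = np.arange(grain_len) / sample_rate
    grain = np.sin(2 * np.pi * freq * t) * np.hanning(grain_len)  # one sine grain
    out = np.zeros(slot_len * len(hourly_counts))
    for i, count in enumerate(hourly_counts):
        for _ in range(int(count)):
            # Random onset within this slot (grain always fits inside it)
            start = i * slot_len + rng.integers(0, max(1, slot_len - grain_len))
            out[start:start + grain_len] += grain
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out  # normalize to [-1, 1]
```

Compressing a year of hourly data this way (here 50 ms per hour) renders roughly 8,760 values in under eight minutes of audio, which matches the goal of making a year of data perceivable in a short period.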
"Etherial" will bring the quantum form into the material through virtual reality, spatial augmented reality, and material form. The work will consist of two windows into the virtual that ultimately control the various visual and sonic quantum forms: a SAR window into a completely immersive VR space that allows one to sculpt quantum mechanics in real time, and a physically rendered sculpture, tracked with gestural sensors, from which one can also perform the work. These two controllers open into a completely immersive VR space, allowing performers to sculpt quantum mechanics in real time in total synchrony with one another and with the virtual environment they control.
In keeping with the theme of "LUX", the quantum revealed, the hydrogen-like atom combinations feature light-emitting wave-function combinations that move toward the science of the phenomenon, while the quantum also suggests the ethereal nature of spirit in the form of light, ETHERIAL/IMMUTABLE: to touch the untouchable.
B. Dandu, Y. Shao, A. Stanley, Y. Visell, Spatiotemporal Haptic Effects from a Single Actuator via Spectral Control of Cutaneous Wave Propagation. Proc. IEEE World Haptics Conference, To Appear, 2019. Best Paper Award Nomination.
G. Reardon, Y. Shao, B. Dandu, W. Frier, B. Long, O. Georgiou, Y. Visell, Cutaneous Wave Propagation Shapes Tactile Motion: Evidence from Air-Coupled Ultrasound. Proc. IEEE World Haptics Conference, To Appear, 2019.
A. Kawazoe, M. Di Luca, Y. Visell, Tactile Echoes: A Wearable System for Tactile Augmentation of Objects. Proc. IEEE World Haptics Conference, To Appear, 2019.
T. Hachisu, Y. Shao, K. Suzuki, Y. Visell, Empirical Study on Transfer Functions from Wrists to Hands. IEEE World Haptics Conference, Work-in-Progress, To Appear, 2019.
"Touching Affectivity" is an interactive sculpture whose vocalizations are sonifications of the way it is touched. The creature experiences its world through pressure sensors and handmade conductive fur, which can detect different types of touch. Exhibit guests can interact with the creature while listening to the creature’s response. Aspects of the conductive fur signal affect the speed, volume, filters, and the timbre of the synthesized sound. The parameters chosen for the sound generation algorithm are grounded in prior research in emotive vocal communication and emotive music. This work explores how gesture can be used to produce sound and communicate emotion.
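A touch-to-sound mapping of the kind described can be sketched as follows. The feature extraction and parameter ranges are hypothetical, chosen only to illustrate turning sensor windows into vocalization parameters; the piece's actual algorithm is grounded in the emotive-vocalization research cited above.

```python
import numpy as np

def touch_to_voice_params(pressure, fur_signal, sample_rate=100):
    """Map one window of touch-sensor samples to synthesis parameters.
    Hypothetical mapping: harder contact -> louder, brighter; faster
    strokes across the conductive fur -> quicker vocalizations."""
    intensity = float(np.mean(pressure))  # average contact pressure, assumed in [0, 1]
    # Mean absolute change per second of the fur signal ~ stroke speed
    roughness = float(np.mean(np.abs(np.diff(fur_signal)))) * sample_rate
    return {
        "volume": min(1.0, intensity),                      # gain of the voice
        "rate": 0.5 + 2.0 * min(1.0, roughness / 50.0),     # vocalization speed multiplier
        "cutoff_hz": 300.0 + 4000.0 * min(1.0, intensity),  # low-pass cutoff (timbre)
    }
```

In a live installation, parameters like these would be recomputed each sensor frame and streamed to the synthesis engine (for example over OSC) so the creature's voice tracks the touch continuously.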
The course, titled "In the Digital Age - Experiencing Architecture and Music Through STEM", is an introduction to Media Arts and Technology through the lens of architecture and music, and adds humanities (H) and arts (A) to the STEM model to produce the THEMAS model. The SERA program introduces qualified high school students to the research enterprise through project-based, directed research in STEM-related fields, including machine learning, marine biology, evolutionary biology, global conflicts, and media arts & technology.
The course challenges what you think architecture and music are by examining how the intersection of these topics has evolved over time through the lens of human experience and the digital age. Examples include the way theme parks are intentionally designed and the role a musical score plays in movies to enhance or manipulate the audience's experience. You will learn the basic concepts of digital architecture and computer music through exercises using physical and digital modeling, 3D fabrication, haptics (touch and sound), and interactive design, highlighting how new media technologies and fabrication tools have allowed for the integration of STEM and the fine arts. Students will attend a field recording workshop and develop a hands-on studio project to learn creative techniques in music composition and sound making. In addition, students will develop oral communication and formal presentation skills through a series of workshop project presentations. By the end of the course, you will develop the methodologies for an interdisciplinary research project. This is an excellent opportunity for participants interested in both science and art to increase their skills and knowledge toward their college education.
The photographs focus on the intersection of noise and signal in the news of the mid-1980s. The images were acquired into the SBMA collection in 2017.
The "Authority of the News" series, Fuji Inkjet, 1986.
Reincarnation is a virtual reality art experience, based on French surrealist painter Yves Tanguy's paintings in combination with my creation of pseudo-natural beings. Reincarnation intends to amplify the experience of original artworks by creating an agent-based spatial narrative and a surreal aesthetic for visual, audio, motion, and interaction. Reincarnation is also an artistic search of animism in various matters, and it challenges the anthropocentric worldview in an artificial intelligence era. By providing a multi-perspective experience, it calls for people's empathy for human beings as well as other organic creations, artifacts, places, and abstract entities.
"Come Hither to Me!" is an interactive robotic performance piece, which examines the emotive social interaction between an audience and a robot. Our interactive robot attempts to communicate and flirt with audience members in the gallery. The robot uses feedback from sensors, auditory data, and computer vision techniques to learn about the participants and inform its conversation. The female robot approaches the audience, picks her favorites, and starts charming them with seductive comments, funny remarks, backhanded compliments, and personal questions. We are interested in evoking emotions in the participating audience through their interactions with the robot. This artwork strives to invert gender roles and stereotypical expectations in flirtatious interactions. The performative piece explores the dynamics of social communication, objectification of women, and the gamification of seduction. The robot reduces flirtation to an algorithm, codifying pick-up lines and sexting paradigms.