RSVP for UIST Practice Talks (10/10 @ 11:30 AM)
On Tuesday, 10/10 at 11:30 AM in the Ehrlicher Room (North Quad, NQ 3100), Sang Won Lee and Anil Çamci from the University of Michigan will give practice talks for their upcoming UIST presentations.

INVISO: A Cross-platform User Interface for Creating Virtual Sonic Environments by Anil Çamci (2017)

Speaker Bio:

I am an Assistant Professor of Performing Arts Technology at the University of Michigan. My work investigates new tools and theories for multimodal worldmaking using a variety of media ranging from electronic music to virtual reality. Previously, I was a postdoctoral research associate at the University of Illinois at Chicago's Electronic Visualization Laboratory, where I led research projects on human-computer interaction and immersive audio in virtual reality contexts. Prior to that appointment, I was a faculty member at Istanbul Technical University's Center for Advanced Studies in Music, where I founded the Sonic Arts program. I completed my PhD at Leiden University in affiliation with the Institute of Sonology in The Hague and the Industrial Design Department at Delft University of Technology. I hold an MSc in Multimedia Engineering from the Media Arts and Technology Department at the University of California, Santa Barbara. My work has been presented worldwide in leading journals, conferences, concerts, and exhibitions. I have received several awards and scholarships, including the Audio Engineering Society Fellowship and the ACM CHI Artist Grant.

Abstract:

The predominant interaction paradigm of current audio spatialization tools, which are primarily geared towards expert users, imposes a design process in which users are characterized as stationary, limiting the application domain of these tools. Navigable 3D sonic virtual realities, on the other hand, can support many applications ranging from soundscape prototyping to spatial data representation. Although modern game engines provide a limited set of audio features to create such sonic environments, the interaction methods are inherited from the graphical design features of such systems, and are not specific to the auditory modality. To address such limitations, we introduce INVISO, a novel web-based user interface for designing and experiencing rich and dynamic sonic virtual realities. Our interface enables both novice and expert users to construct complex immersive sonic environments with 3D dynamic sound components. INVISO is platform-independent and facilitates a variety of mixed reality applications, such as those where users can simultaneously experience and manipulate a virtual sonic environment. In this paper, we detail the interface design considerations for our audio-specific VR tool. To evaluate the usability of INVISO, we conduct two user studies: The first demonstrates that our visual interface effectively facilitates the generation of creative audio environments; the second demonstrates that both expert and non-expert users are able to use our software to accurately recreate complex 3D audio scenes.
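For a concrete sense of the browser-side building blocks a web-based spatializer like INVISO can rest on, here is a minimal sketch using the standard Web Audio API's PannerNode to render a sound source moving around a listener. This is an illustration of web-native 3D audio only, not code from INVISO itself; the oscillator source, coordinates, and update rate are illustrative assumptions.

```typescript
// Minimal browser sketch: one spatialized source orbiting a stationary
// listener via the standard Web Audio API. Not INVISO code.
const ctx = new AudioContext();
// Note: browsers require a user gesture before audio plays;
// call ctx.resume() from a click handler in a real page.

// Place the listener at the origin.
ctx.listener.positionX.value = 0;
ctx.listener.positionY.value = 0;
ctx.listener.positionZ.value = 0;

// A PannerNode renders a source at a 3D position; the HRTF model
// gives binaural (headphone) spatialization.
const panner = new PannerNode(ctx, {
  panningModel: "HRTF",
  distanceModel: "inverse",
  positionX: 2, positionY: 0, positionZ: 0,
});

// Any audio source works; a sine oscillator keeps the sketch self-contained.
const source = new OscillatorNode(ctx, { frequency: 440 });
source.connect(panner).connect(ctx.destination);
source.start();

// Animate the source in a circle around the listener so the
// sound is heard moving through space.
let angle = 0;
setInterval(() => {
  angle += 0.05;
  panner.positionX.value = 2 * Math.cos(angle);
  panner.positionZ.value = 2 * Math.sin(angle);
}, 50);
```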

SketchExpress: Remixing Animations for More Effective Crowd-Powered Prototyping of Interactive Interfaces by Sang Won Lee (2017)

Speaker Bio:

Sang Won Lee is a Ph.D. candidate in Computer Science at the University of Michigan. His work lies at the intersection of music and computer science; he focuses on developing interactive systems that mediate musical collaboration and enable novel forms of artistic expression. He also works to bring the collaborative, live nature of music-making to other domains, creating computational systems that facilitate real-time collaborative creation for tasks ranging from crowdsourcing to programming. He holds a diploma in Industrial Engineering from Seoul National University and an M.S. in Music Technology from Georgia Tech. As a musician, he has performed numerous times in peer-reviewed venues including NIME, CHI-Art, and ICMC, and he received the 2016 International Computer Music Association Music Award for his composition Live Writing: Gloomy Streets.

Abstract:

Low-fidelity prototyping at the early stages of user interface (UI) design can help designers and system builders quickly explore their ideas. However, interactive behaviors in such prototypes are often replaced by textual descriptions, because creating animated, interactive elements usually takes even professionals hours or days. In this paper, we introduce SketchExpress, a crowd-powered prototyping tool that enables crowd workers to create reusable interactive behaviors easily and accurately. With the system, a requester (a designer or end user) describes aloud how an interface should behave, and crowd workers make the sketched prototype interactive within minutes using a demonstrate-remix-replay approach: behaviors are manually demonstrated, refined using remix functions, and then replayed on demand. The recorded behaviors persist for future reuse, helping users communicate with the animated prototype. We conducted a study with crowd workers recruited from Mechanical Turk, which demonstrated that workers could create animations with SketchExpress in 2.9 minutes on average, with a 27% gain in animation quality compared to the baseline condition of manual demonstration.
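To make the demonstrate-remix-replay idea concrete, here is a toy sketch of one way a demonstrated behavior could be captured as timestamped frames and replayed later. The class, method names, and the retiming "remix" operation are illustrative assumptions for this announcement, not SketchExpress's actual implementation.

```typescript
// Toy sketch of demonstrate-remix-replay: record timestamped positions
// for a sketched element while it is dragged, then replay them on demand.
type Frame = { t: number; x: number; y: number };

class BehaviorRecording {
  private frames: Frame[] = [];
  private t0 = 0;

  // Demonstrate: begin a fresh recording.
  startDemonstration(): void {
    this.frames = [];
    this.t0 = performance.now();
  }

  // Call on every pointer move while the worker demonstrates.
  capture(x: number, y: number): void {
    this.frames.push({ t: performance.now() - this.t0, x, y });
  }

  // One simple "remix" operation: uniformly retime the demonstration.
  retime(speed: number): void {
    this.frames = this.frames.map(f => ({ ...f, t: f.t / speed }));
  }

  // Replay: schedule each captured frame at its recorded offset.
  replay(apply: (x: number, y: number) => void): void {
    for (const f of this.frames) {
      setTimeout(() => apply(f.x, f.y), f.t);
    }
  }
}

// Usage: bind capture() to pointermove on the sketch canvas, then replay
// into a function that moves the element, e.g.:
// rec.replay((x, y) => { el.style.left = `${x}px`; el.style.top = `${y}px`; });
```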