3D UX and Multimodal Interactions for Samsung
Our team of hardware and software scientists, together with visual and interaction designers, was charged with exploring interactive environments that combined every 3D display technology Samsung had with computer vision and voice-enabled sensors. This problem statement gave birth to hundreds of design possibilities. We used auto-stereoscopic LFDs with depth sensors for retail applications, active stereoscopic displays (3D Smart TVs) for home entertainment navigation, and 3D display smartphones where users could play with 3D content floating above the mobile screen. Over a period of 1.2 years I completed several projects with this team, creating prototypes, software, gesture interface guidelines, and intellectual property. I also wrote research papers for internal publication on several 3DUI topics.
Touchless Hand Gestures for 3D Smartphones
I conducted a technology-first design exploration into how user interactions could be designed around the front-facing camera as a visual sensor on a 3D display phone. After analyzing the technological constraints and some initial user behavior, we grouped candidate interactions into several categories. Basic studies of hand form, hand motion, and hand-to-phone relationships helped us narrow down key use cases where touchless interaction genuinely benefited users. We then mapped touchless hand gestures to appropriate visual representations that provided logical user feedback. I worked with our computer vision scientists at the Kiev Research Center to develop a library of gestures and an SDK for Android applications on Samsung devices. Our gesture kit could detect many hand configurations in combination with six degrees of freedom of motion, enabling intricate user interactions and new ways to engage with 3D content floating above the mobile display.
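The idea of pairing a hand configuration with a motion axis can be illustrated with a toy classifier. This is a minimal sketch, not the SDK's actual API (which was internal): `PalmSample`, the gesture names, and the `threshold` parameter are all illustrative assumptions, and it reduces the six degrees of freedom to three translation axes for brevity.

```python
from dataclasses import dataclass

@dataclass
class PalmSample:
    x: float  # normalized horizontal position in the camera frame (assumed)
    y: float  # normalized vertical position (assumed)
    z: float  # estimated distance from the camera (assumed)

def classify_motion(samples, threshold=0.2):
    """Classify a short palm trajectory into a coarse touchless gesture
    by its dominant displacement axis. Illustrative only -- the real
    gesture kit recognized many hand configurations combined with six
    degrees of freedom of motion."""
    if len(samples) < 2:
        return "none"
    dx = samples[-1].x - samples[0].x
    dy = samples[-1].y - samples[0].y
    dz = samples[-1].z - samples[0].z
    # pick the axis with the largest displacement
    mags = {"swipe": abs(dx), "flick": abs(dy), "push": abs(dz)}
    axis = max(mags, key=mags.get)
    if mags[axis] < threshold:
        return "hover"  # hand present but not moving enough to trigger
    if axis == "swipe":
        return "swipe_right" if dx > 0 else "swipe_left"
    if axis == "flick":
        return "flick_down" if dy > 0 else "flick_up"
    return "push" if dz < 0 else "pull"
```

A production recognizer would of course work on per-frame hand-pose estimates and filter noise over time; the point here is only the mapping from motion to discrete gesture events that the visual feedback layer consumed.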
3D TV UI + Hand Gesture Interface Framework
Our team's design brief entailed creating user interfaces, rules of interaction, and user scenarios that best demonstrated how 3D sensing and large 3D displays can work in tandem. We wanted to create immersive environments where users felt they were interacting directly with 3D content floating in space. We studied ergonomic stance and defined range, comfort, and effort of interaction, among other parameters, to create applications demonstrating the strengths of this interaction model. We built several working prototypes, refining the user interface, gesture recognition, and demo scenarios for investors, executives, and technology shows around the world.
3D UI Authoring Tool
Business Case: One of our research sponsors asked us to create a 3D authoring tool to be sold as bundled software to our B2B customers. They wanted customers to be able to build user interfaces for Samsung products easily and leverage their capabilities.
User Needs: Business customers wanted control and flexibility over the hardware they bought from Samsung. They wanted a user-friendly 3D authoring tool they could hand to their internal design teams to create designs for applications and interfaces. They also wanted to update their 3D interfaces easily over time and tailor aspects of them to their customers' needs.
Design: We started by researching the tools and software designers already used to create 3D, 2D, motion, and interactive content, and leveraged the mental models pre-established across those tools. We made paper prototypes and invited designers to test and play with our initial design and give us feedback. We gathered user requirements and capabilities for an MVP and iterated in agile sprints, with development teams in Ukraine and design teams in South Korea, to build a working prototype. We completed the product over a period of 12 months and delivered the solution to our business sponsor to include in their delivery package.
2D to 3D Video Content Creation Tool
Business Case: The large format display (LFD) business unit was having difficulty selling 3D LFDs because clients said 3D content was too expensive to make, 3D content was not readily available, and all their existing video material was 2D. They commissioned our team to come up with a tool that converts 2D content to 3D content.
Use Case: Large retailers and video wall owners wanted the capability to convert their existing video content to a 3D format before considering auto-stereoscopic 3D LFDs for their next investment. The availability of 3D content and the quality of 2D-to-3D conversion would heavily influence purchasing decisions for this technology.
Design: Working with a team of scientists from Samsung Research India, we identified a technology called Depth Image Based Rendering (DIBR) that can convert 2D content to 3D. For our MVP we chose flat animation content as the target for 3D conversion. We followed an automated, minimal-interaction design principle so that a 2D video could be converted into a 3D video with two or three simple clicks. Users could load the video, scan its frames, manually identify key frames with clear object boundaries for depth analysis, and apply the resulting depth maps to frames with recurring elements throughout the video. They could also inspect the quality of the depth map for individual frames, re-run depth detection for a frame or a range as needed, and export the result in the desired 3D format for display on our 3D LFD.
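The core DIBR step can be sketched compactly: given a frame and a per-pixel depth map, each pixel is shifted horizontally in proportion to its depth (disparity) to synthesize a second eye's view. This is a minimal pure-Python sketch under stated assumptions, not the tool's actual pipeline; `max_disparity` is an illustrative parameter, the frames are grayscale 2D lists, and real DIBR handles hole filling and occlusion far more carefully.

```python
def synthesize_view(frame, depth, max_disparity=8):
    """Synthesize a right-eye view from a frame and its depth map.

    frame and depth are 2D lists of equal shape; depth values lie in
    [0.0, 1.0], where 1.0 is nearest the viewer (largest pixel shift).
    Illustrative sketch of DIBR -- not the production implementation.
    """
    height, width = len(frame), len(frame[0])
    view = [[None] * width for _ in range(height)]
    zbuf = [[-1.0] * width for _ in range(height)]  # nearer pixels win
    for y in range(height):
        for x in range(width):
            # disparity proportional to depth; near objects shift most
            shift = int(round(depth[y][x] * max_disparity))
            nx = x - shift  # shift left for the right-eye view
            if 0 <= nx < width and depth[y][x] >= zbuf[y][nx]:
                view[y][nx] = frame[y][x]
                zbuf[y][nx] = depth[y][x]
        # fill disocclusion holes by propagating the nearest left neighbor
        last = frame[y][0]
        for x in range(width):
            if view[y][x] is None:
                view[y][x] = last
            else:
                last = view[y][x]
    return view
```

Pairing the synthesized view with the original frame yields the stereo pair; the user-facing "clicks" in the tool essentially selected which key frames received a hand-checked depth map before this warping ran over the video.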
3D Display Mobile Home UI Concepts
A small part of my 3D UX team, comprising an industrial designer, a computer scientist, and me (UX designer), was asked to quickly design a mobile home experience for a 3D display smartphone. In a two-week design exercise we created quick explorations that configured home screen contents, widgets, apps, and notifications in 3D layouts. Our designs explored perspective spatial layouts, layered content layouts, and 3D representation styles such as a stack of photos, with transitions like a poker player shuffling cards. The concepts were shown in an executive review to secure future project budget and explain the potential of 3D phones to become a viable product.