
ABB-YuMi Showstage

Credits: cppstudios, colorsound

At the 2015 Hanover Industry Trade Fair, visitors were captivated by ABB's interactive presentation introducing "YuMi", a truly innovative collaboration between human and robot. The installation, displayed on and in front of the ABB stage, spanned four interconnected presentation surfaces.


Visitors were offered a spectacular interplay: a high-quality audio-visual composition, displayed across a monitor architecture and coordinated precisely with Marc Gassert's moderation; YuMi performing live before the eyes of the audience; and open experimental stations where visitors could spontaneously try out YuMi's functions for themselves.

The basic technical parameters of the YuMi demo stage, developed by CPP Studios on behalf of ABB, are as follows:

- 27 high-definition displays controlled by a vvvv application
- video over 8,000 pixels wide, delivered by eight PCs running in parallel
- 5 brand-new YuMi robots, controlled via a custom timeline-based application
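The idea of driving one very wide canvas from several render machines can be sketched as follows. The real vvvv setup is not public; only the 8,000-pixel width and the eight PCs come from the article, while the function name, the exact canvas width, and the assumption of a simple left-to-right split are illustrative.

```python
# Hypothetical sketch: splitting one wide video canvas across render PCs.
def viewport_slices(total_width: int, num_machines: int):
    """Return (x_offset, width) for each render machine, left to right."""
    base, extra = divmod(total_width, num_machines)
    slices, x = [], 0
    for i in range(num_machines):
        w = base + (1 if i < extra else 0)  # spread any remainder pixels
        slices.append((x, w))
        x += w
    return slices

# Assumed example: an 8640 px canvas split across 8 PCs (1080 px each)
for machine, (x, w) in enumerate(viewport_slices(8640, 8)):
    print(f"PC{machine}: x_offset={x}, width={w}")
```

Each machine would then render only its own slice of the shared timeline, which keeps the per-PC pixel load bounded as the canvas grows.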

YuMi, short for "You and Me", was introduced here as "the first truly collaborative robot": with its two arms and a total of 14 joints (seven per arm), it can work together with humans efficiently. A sensor-based safety system makes physical protective barriers unnecessary.

The collaboration was demonstrated live at the trade fair: interaction between the moving robots and the audience was barrier-free. It was common for members of the audience to trigger the robots' safety mechanism and thereby stop their movement. The robot control application developed by CPP enabled the robots to be reset via software, without interrupting the performance.
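The recovery step described above, acknowledging a protective stop in software and rejoining the running show, can be sketched roughly like this. The article only states that CPP's application could reset the robots via software; the `RobotLink` class and its methods are invented stand-ins, not the actual ABB or CPP API.

```python
# Hypothetical sketch: software recovery from a protective stop.
class RobotLink:
    """Stand-in for a robot controller connection (not a real SDK)."""
    def __init__(self):
        self.protective_stop = False
        self.motors_on = True

    def clear_stop(self):
        self.protective_stop = False

    def restart_motors(self):
        self.motors_on = True

def recover(robot: RobotLink, resume_at: float) -> float:
    """Acknowledge the stop, re-enable motors, and return the timeline
    position to resume from so the robot rejoins the show in sync."""
    if robot.protective_stop:
        robot.clear_stop()      # acknowledge the safety event
        robot.restart_motors()  # bring the drives back online
    return resume_at            # rejoin the shared show timeline

robot = RobotLink()
robot.protective_stop = True
t = recover(robot, resume_at=42.5)
```

The key design point is that the reset only touches the stopped robot's state, while the shared timeline keeps running, so the rest of the show is never impaired.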

Using a remote control, moderator Marc Gassert was able to run the show independently and thus respond immediately to the reactions of the audience.

All the robots' movements were captured with an exoskeleton and subsequently integrated into the show. Both the skeleton and the data-recording system were newly designed and manufactured from scratch.
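The record-and-replay workflow behind such motion capture can be sketched as follows: sample the 14 joint angles with timestamps during a take, then look the pose up again when the show timeline plays. The data layout, sample rate, and function names here are assumptions for illustration, not the actual CPP recording format.

```python
# Hypothetical sketch: record timestamped joint poses, replay by timeline.
from bisect import bisect_right

def record(samples):
    """samples: iterable of (t_seconds, [angle_j1..angle_j14]) tuples.
    Stored sorted by time for fast timeline lookup."""
    return sorted(samples)

def pose_at(take, t):
    """Return the most recent recorded pose at or before time t."""
    times = [s[0] for s in take]
    i = bisect_right(times, t) - 1
    return take[max(i, 0)][1]

take = record([(0.0, [0.0] * 14), (0.5, [0.1] * 14), (1.0, [0.2] * 14)])
print(pose_at(take, 0.7))  # -> the pose recorded at t=0.5
```

A show timeline then only has to store take names and start times; the per-joint detail stays in the recorded data.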

colorsound, Thursday, Sep 10th 2015
