Archives for category: human-machine interface

 

One of many prototype space suits you and I can see, but not try on, at Kennedy Space Center

First, go see Gravity in 3D IMAX. Why? Because Patriotism: the ISS is partly Canadian, the Canadarm and Dextre appear in the movie, and IMAX is a Canadian invention. Also, IMAX has been to the real ISS. 3D is almost mandatory in a space movie, where there are fewer environmental cues to indicate relative position, like, say, the Earth 300 miles below you, or the peaks of the Himalayan mountains 1% closer.

The movie itself was made with the contribution of robotic arms to lend a complete sense of zero-G choreography. Cameras have been controlled robotically for at least 40 years for special-effects shots. Motion-control cameras were developed for Star Wars (computer controlled) and 2001: A Space Odyssey (mechanical), since those films required repeated camera passes over the same spaceship models, allowing each pass’s film to be overlaid with the next, lining up things like lights, background and engine glow perfectly. No human could match the repeatability. Check out this video about John Dykstra’s pioneering work on robotic camera control.

But robots have another capability beyond endless perfect repetition – the motion can be planned, carefully, along paths that would tax a camera person: simultaneous pan, tilt, zoom, dolly and rack, in smooth elegant arcs. In the world of CG, cameras have always had this freedom to move on gentle curves, maintaining flawless lock on a subject (in fact it takes quite a bit of talent to add back in the ‘human’ elements of a camera operator, as demonstrated so well in Battlestar Galactica).
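To make that concrete, here is a tiny Python sketch of the virtual-camera idea: the camera glides along a smooth parametric arc while a look-at direction is recomputed at every step, so the subject stays perfectly framed. The path, coordinates and function names are my own invention for illustration – this is not how any real rig, or the film’s pipeline, is actually programmed.

import numpy as np

def camera_path(t):
    """Camera position at time t in [0, 1]: a gentle arc around the subject."""
    angle = np.pi * t                       # sweep half a circle
    return np.array([5 * np.cos(angle), 5 * np.sin(angle), 1.0 + 0.5 * t])

def look_at(camera_pos, subject_pos):
    """Unit vector pointing the lens straight at the subject."""
    d = subject_pos - camera_pos
    return d / np.linalg.norm(d)

subject = np.array([0.0, 0.0, 1.5])         # where the 'actor' floats
for t in np.linspace(0.0, 1.0, 5):
    pos = camera_path(t)
    print(t, pos.round(2), look_at(pos, subject).round(2))

A real motion-control system layers joint-speed and acceleration limits on top of this, but the principle is the same: position from a smooth curve, orientation from a lock-on constraint.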

Putting those virtual camera moves into a real camera is something that Bot and Dolly handled in Gravity - their demo reel shows the alien smoothness and confidence of a robotic camera in action. Another demo reel by The Marmalade shows the new high-framerate possibilities of precise high speed motion control using their Spike robot camera system.

The technology is nothing without the illusion-creating setup. One robot can be fitted with the camera. Another can be fitted with, say, an actor/astronaut, or a keylight. Moving these two carefully can ‘null out’ any hint that a scene was filmed in gravity, since the gravity that we expect to be coming from one direction in a scene can now be coming from anywhere. In fact, full-motion flight simulators use the trick of “redirecting Gravity” to simulate the feel a pilot would have in a real aircraft. Like any good special effect, the trick is to fool the senses, not to simulate reality.

The big change in this technology is the real-time speed (i.e. fast!) at which the cameras can now be moved with high accuracy. The control software and motors have been refined so that the possibilities of motion are now beyond what a human operator can do. For example, here is an industrial pancake sorting robot (!), performing at about 3 pancakes per second, and another playing perfect pool.

Of course, you need a Director who is up to using these tools, and this was the case. Alfonso Cuarón has apparently always wanted to be two things – an Astronaut and a Director – and being a director pays better. He is also known for his very long continuous shots with no cuts, and the opening scene, as one of the characters in the movie says, “breaks the record”. The cinematography really is the core of the movie, since the alien world of zero G, where momentum is dominant, is the Antagonist. (Check out a demo by ISS astronaut Mike Fossum on angular momentum before you see the movie!)

The difficulty is that the camera motion is now the star of the show, meaning the actors have to adapt to the shot, and the shot has to be pre-visualized and planned more carefully, as described here by the Director. Although the control system opens up new possibilities, organic control and intuitive direction of the tool become the next challenge. “Hey Robocam, orbit around Sandra B’s head as she squints into the sun setting behind the rim of the Earth… No? You only understand 6-degree-of-freedom target points and motion splines? And you want a Union? Hmm.”

For a great Astronaut’s View on the movie, check out Mark Kelly’s write-up at the Washington Post.

Check out this interview with the Cinematographer, Emmanuel Lubezki, who discusses the virtual lighting challenges in the movie.

For an excellent in-depth look at the movie’s tech, check out CG Society’s article.

And if you think that the plot line is somewhat infeasible, this overview from ESA might convince you otherwise: space junk is a huge problem for astronauts, and satellites and manned missions do have to get out of the way periodically!

If you are interested in the unique editing and shot style of the movie, contrast it with the “traditional repertoire” of editing techniques used to move from one shot to the next. This old-school website gives a great overview, with long-duration shots, or “Plan Sequence”, near the bottom.

Track space junk (and stuff that most certainly isn’t junk) using the spacejunk app for Android, Night Sky for iOS, or nyso’s site for desktop.

Yay Canon Canada

EF-S 10-22mm with adapter versus EF-M 11-22mm. The red area shows an overlay of equivalent size.

I have had the 11-22 for a week or so now from Henry’s here in Canada and took the chance to compare it to the 10-22 EF-S with the EF-M to EF adapter on the EOS-M. Up until now I’ve kept the 10-22 on the EOS-M for tourist and casual shooting as well as a few commercial shoots as a secondary camera.

Considering that a good Micro Four Thirds super-wide costs $650-$800 and the EF-S 10-22mm is closer to $900, the EF-M 11-22mm at $400 is a bit of a steal.

Here are my findings from using it and comparing it to the EF-S 10-22mm.

Read the rest of this entry »

As part of our work I get to do quite a bit of photography and videography. And an interesting shot is often one taken from a tight spot.

The EOS-M from Canon was a very handy camera – the size of the ‘heroic’ small video cameras, but with an APS-C sensor.

It helped that they were being sold at 40-50% off the original price because of a firmware issue that turned many people off buying them.

As Warren Buffett might say – buy value, not perception. This little camera was indeed perceived as a slowpoke because of its comically slow autofocus, but having spent enough time writing algorithms for servo control, I knew that it was probably just a firmware update away from being faster. That turned out to be the case when firmware 2.0.2 was released. Suddenly the camera was twice as fast at focusing, and perceptually it felt like someone had woken it up with BTTF3 wake-up juice. Amazing? No, just engineering:

First, a camera with autofocus actually has to move parts of the lens using a motor, then check how blurry the image is, then repeat hundreds of times a second. It’s really a robot eyeball, so we’ll call it an “EyeBot”. Like all robots, its motion is determined by its internal control system – which really means the pathway from sensing to deciding to doing and back again.
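As a rough Python sketch of that sense-decide-do loop for contrast-detect autofocus: capture_frame and move_focus_motor below are hypothetical stand-ins for the camera’s firmware calls, and the real control law is far more refined than this simple hill climb.

import numpy as np

def sharpness(image):
    """Higher when edges are crisp: mean squared gradient of the frame."""
    gy, gx = np.gradient(image.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

def autofocus(capture_frame, move_focus_motor, step=16, min_step=1):
    """Hill-climb: nudge the lens, check the blur, reverse and refine when it gets worse."""
    best = sharpness(capture_frame())               # sense
    direction = +1
    while step >= min_step:
        move_focus_motor(direction * step)          # do
        current = sharpness(capture_frame())        # sense again
        if current > best:                          # decide
            best = current                          # keep going this way
        else:
            direction = -direction                  # overshot the peak: turn around...
            step //= 2                              # ...and take smaller steps
    return best

Every trip around that loop costs a sensor read (time), a motor move (power), and the final step size bounds how precise the focus lands (accuracy) – which sets up the trade-off below.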

Remember the old saying: “Fast, Cheap, or Good: pick any two”?

For an EyeBot control system, the three items are: “Fast, Power-efficient, or Accurate: pick any two-ish.”

Read the rest of this entry »


When we think of enhancing vision, we tend to think in terms we can really get a grasp on:

  • zoom in (enhance!)
  • night vision
  • microscopy (really zoom in)
  • long exposure (astronomy)
  • timelapse (watch that glacier hustle)
  • slow motion (Mythbusters finales)

My favourite development in computer vision from SIGGRAPH 2012 was Eulerian Video Magnification.

Essentially, it amplifies motion in video. You have to see it to really get a grasp on what it means, so here is the video. Check out the throbbing arm artery!

The secret here is to look at changes in the video from moment to moment, just like when you flash between two photos you took at a party and can spot the differences easily. The algorithm tracks the changes between many frames over time, keeping only the differences.

But then the clever bit – what if you only pay attention to certain rates of change and ignore the rest? For example what if you had video of two pendulums swinging next to each other – one short and fast, the other long and slow – using Eulerian methods, you could ignore the fast motions or ignore the slow motions, much like a graphic equalizer in audio can isolate bass and treble frequencies in your music. You could effectively filter out either the slow or the fast pendulum depending on your ‘equalizer’ setting.

Once you’ve isolated the motions you want to enhance, you add the resulting difference image back into each frame of the video. If you really want to enhance it, you do this several times in a row using a feedback loop. The more times you feed back the difference, the more that specific motion in the video gets exaggerated compared to other motions in the video.
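For the curious, here is a stripped-down Python sketch of that pipeline: a temporal band-pass filter on raw pixel values, then amplify and add back. The real paper also decomposes each frame into a spatial pyramid and uses smarter filters; the function names, parameters and the commented-out video loader here are mine, purely for illustration, and instead of feeding the difference back several times this version simply scales it once by alpha.

import numpy as np

def magnify_motion(frames, fps, low_hz, high_hz, alpha=10.0):
    """Amplify pixel changes whose temporal frequency lies in [low_hz, high_hz].

    frames: float array of shape (num_frames, height, width), values in [0, 1].
    """
    # Per-pixel signal over time, taken into the frequency domain along the time axis.
    spectrum = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)

    # The 'graphic equalizer': keep only the band of temporal frequencies we care about.
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[~band] = 0.0

    # Back to the time domain: this is the isolated motion signal.
    filtered = np.fft.irfft(spectrum, n=frames.shape[0], axis=0)

    # Add the exaggerated difference back into the original video.
    return np.clip(frames + alpha * filtered, 0.0, 1.0)

# Example: boost ~1 Hz changes (roughly a resting pulse) in a 30 fps clip.
# frames = load_grayscale_video("clip.mp4")   # hypothetical loader
# output = magnify_motion(frames, fps=30, low_hz=0.8, high_hz=1.2, alpha=15)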

This works best for periodic motions, like a pulse or guitar string or heartbeat, but enhancing specific motions is pretty awesome. In fact, you do it all the time without really being aware.

How? Well, try waving your hand at the very edge of your vision while holding up 2 fingers. Can you notice the motion? YEP. But can you count the fingers? NOPE. The edges of your vision are tuned to detect motion changes, not detail, while the centre portion of your vision is tuned for detail. You’re not going to read a book out of the corner of your eye, but you’ll definitely notice the sabre-toothed tiger coming at you.

Of course, you have to watch the video – still images don’t really catch the effect!

From Gavin Aung Than’s Zen Pencils

A big issue that faces anyone who has had to grow up is choosing a career. It is tied so much to our time, our identity, our goals and what we want our lives to mean.

Even those lucky enough to get there (and by lucky we can mean any mix of hard work, opportunity, and will) still have to contend with the day-to-day reality of Living A Dream of a Certain Size.

Take Chris Hadfield – the mundane aspects of being an astronaut may dominate his time: training, studying, paperwork, details and more details; a dream job tied up in procedure manuals and velcro. What people find most interesting about his being an astronaut is not just the experience itself, but what it took for him to get there.

So what is that? Of course there is the studying, and of course there is the hard work, but what he has shared with the internet community during his mission is the not-so-simple fact that you actually can change your future. Not just in principle, and not only after a lot of preparation, but every time you make a decision, big or small.

This message, above even the inspiration that science, research and curiosity provide, is the most human one, and the one he is most qualified to share.

And of course a message like that would be lost on most of us if not put into cartoon format, so here it is, from Reddit and Gavin Aung Than.
