Displays in a vibrating environment: How NASA addressed a blurry problem


The effect of a synchronized, strobed display in reducing motion-blur readability issues in a high-vibration environment. Or, more specifically: turning a mess into less of a mess.

The stereotypes are true: the boost into space from sea level is a shaky, G-infested carnival ride with every Fourier component you care to name. NASA ran into exactly this problem on its troubled Ares I project. Some rockets have a dominant resonance frequency along the long axis, termed 'pogo' (like the stick), and in human-rated vehicles this means a dominant vibration mode passes to the passengers. In the case of the Ares I, this was on the order of 0.7 G at about 12 Hz, working out to around 5 mm of motion. If the computer displays the passengers are looking at do not have the same damping and resonance characteristics as their own eyes and heads, motion blur will make the displays unreadable, as simulated above.
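
As a back-of-envelope check (a minimal Python sketch, assuming a pure sinusoid at the quoted figures), the displacement amplitude follows directly from the acceleration and frequency:

```python
import math

# Minimal check: for sinusoidal motion x(t) = A*sin(2*pi*f*t), the peak
# acceleration is A*(2*pi*f)**2, so A = a_peak / (2*pi*f)**2.
g = 9.81              # m/s^2
a_peak = 0.7 * g      # the ~0.7 G figure quoted above
f = 12.0              # Hz

A = a_peak / (2 * math.pi * f) ** 2
print(f"amplitude ~{A * 1e3:.1f} mm, peak-to-peak ~{2 * A * 1e3:.1f} mm")
# -> amplitude ~1.2 mm, peak-to-peak ~2.4 mm: millimetre-scale motion,
#    the same order as the rough figure quoted above.
```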

A solution tested was to strobe the display in the same way LED-based displays are dimmed: a square-wave duty cycle is applied so that the display is actually off some of the time. The duty cycle is synchronized to the main vibrational component of the pogo motion, removing the worst of the motion blur at the expense of some brightness (this simulated view assumes the brightness can be boosted somewhat to compensate).

Trading brightness for clarity: the effect of strobing a display in sync with a sinusoidal vibration mode

When compensating for a single sinusoidal mode, the loss in brightness is not that great if the duty cycle of the strobing is phase-matched to a displacement peak of the motion, as shown. A vibration reduction of 90% is possible with a 20% duty cycle, at an 80% loss in brightness.
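
This trade-off is easy to verify for a single sinusoidal mode. The sketch below (my own illustration, not the NASA implementation) assumes the lit window is centred on a displacement peak of x(t) = A·cos(2πft) and computes how much of the peak-to-peak excursion the strobing removes:

```python
import math

def blur_reduction(duty_cycle: float) -> float:
    """Fraction of peak-to-peak motion removed when the display is lit
    only for `duty_cycle` of each vibration period, with the lit window
    centred on a displacement peak of x(t) = A*cos(2*pi*f*t).

    Within the window the phase spans +/- pi*duty_cycle, so the residual
    excursion is A*(1 - cos(pi*duty_cycle)), versus 2*A unstrobed.
    """
    residual = (1 - math.cos(math.pi * duty_cycle)) / 2
    return 1 - residual

for d in (0.1, 0.2, 0.5):
    print(f"duty {d:.0%}: blur reduced {blur_reduction(d):.0%}, "
          f"brightness lost {1 - d:.0%}")
# duty 20% -> ~90% blur reduction at an 80% brightness cost,
# matching the trade-off described above.
```

At a 20% duty cycle the display is only lit while the phase is within ±36° of the peak, so the residual excursion is A(1 − cos 36°) ≈ 0.19 A against 2 A unstrobed: the ~90% reduction quoted above.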

Read the article and see the demo video here:

http://gizmodo.com/5880850/how-nasa-solved-a-100-million-problem-for-five-bucks

UX Science – measurement gets the answer

The simplification of user interfaces has been proceeding quickly now that the last vestiges of skeuomorphism are gone. Like any new technology, the first iterations of an interface must be familiar to its users. Early cars looked like carriages, early lightbulbs behaved like gaslight, early televisions looked like radios, and the first home computers worked like typewriters (and still do!).

But any design trend ultimately overshoots the mark. In this case, iconography has possibly become oversimplified, and buttons without outlines or contrast fill are being used because retina-class displays support the fine line widths.

Curt Arledge addresses one basic question in this user-interface direction: does an outlined or contrast-filled button have better usability? Check out his results here.

In summary, two things seem to matter. First, the users' familiarity with the icon type: the common visual language that all interfaces share, to a great degree, in the iconography alphabet. Second, that user testing is still required, since differences appear in counter-intuitive places, and some design decisions affect usability less than expected.

The Robotics Behind the Gravity Movie

One of many prototype space suits you and I can see, but not try on, at Kennedy Space Center

First, go see Gravity in 3D IMAX. Why? Because Patriotism: the ISS is partly Canadian, the Canadarm and Dextre appear in the movie, and IMAX is a Canadian invention. Also, IMAX has been to the real ISS. 3D is almost mandatory in a space movie, where there are fewer environmental cues to indicate relative position, like, say, the Earth 300 miles below you, or the peaks of the Himalayas 1% closer.

The movie itself was made with the contribution of robotic arms to lend a complete sense of zero-G choreography. Cameras have been controlled robotically for at least 40 years for special-effects shots. Motion-control cameras were developed for Star Wars (computer controlled) and 2001: A Space Odyssey (mechanical), since those films required repeated camera passes over the same spaceship models, allowing each pass's film to be overlaid with the next, lining up lights, backgrounds, and engine glow perfectly. No human could match the repeatability. Check out this video about John Dykstra's pioneering work on robotic camera control.

But robots have another capability besides endless perfect repetition: the motion can be planned, carefully, along paths that would tax a camera operator: simultaneous pan, tilt, zoom, dolly, and rack, in smooth, elegant arcs. In the world of CG, cameras have always had this freedom to move on gentle curves while maintaining flawless lock on a subject (in fact, it takes quite a bit of talent to add back in the 'human' elements of a camera operator, as demonstrated so well in Battlestar Galactica).
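
To make that "flawless lock" concrete: in CG (and in motion-control path planning) the camera position is typically interpolated along a smooth curve, while the orientation is re-derived every frame from a look-at constraint. Here is a minimal Python sketch of the idea; the control points and the subject at the origin are hypothetical choices for illustration:

```python
import numpy as np

def bezier(p0, p1, p2, p3, t):
    """Cubic Bezier point at parameter t in [0, 1]."""
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Orthonormal camera basis (right, true_up, forward) aimed at target."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return right, true_up, forward

# Hypothetical control points for a gentle arc past a subject at the origin.
p0, p1, p2, p3 = (np.array(p, float) for p in
                  [(5, -5, 2), (5, 0, 3), (-5, 0, 3), (-5, 5, 2)])
subject = np.zeros(3)

# Sample the path: position glides along the curve, orientation stays locked.
for t in np.linspace(0.0, 1.0, 5):
    eye = bezier(p0, p1, p2, p3, t)
    _, _, forward = look_at(eye, subject)
    print(f"t={t:.2f} eye={np.round(eye, 2)} forward={np.round(forward, 2)}")
```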

Putting those virtual camera moves into a real camera is something that Bot and Dolly handled in Gravity; their demo reel shows the alien smoothness and confidence of a robotic camera in action. Another demo reel, by The Marmalade, shows the new high-frame-rate possibilities of precise high-speed motion control using their Spike robot camera system.

The technology is nothing without the illusion-creating setup. One robot can be fitted with the camera. Another can be fitted with, say, an actor-astronaut, or a key light. Moving the two carefully can 'null out' any hint that a scene was filmed in gravity, since the gravity we expect to come from one direction in a scene can now come from anywhere. In fact, full-motion flight simulators use the same trick of 'redirecting gravity' to simulate the feel a pilot would have in a real aircraft. Like any good special effect, the trick is to fool the senses, not to simulate reality.
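
The geometry behind 'redirecting gravity' is just a rotation: pick the direction you want gravity to appear to act in the frame, then pose the camera (and lights) with the rotation that maps real-world 'down' onto it. A minimal numpy sketch of that idea, with hypothetical direction choices:

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):          # opposite vectors: 180-degree turn
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K / (1 + c)

# Real-world "down", and the direction we want gravity to APPEAR to act
# in the framed shot (both hypothetical choices for illustration).
world_down = np.array([0.0, 0.0, -1.0])
apparent_down = np.array([1.0, 0.0, 0.0])   # "down" is screen-right

R = rotation_between(world_down, apparent_down)
# Pose the camera (and key light) with R: hair, cloth, and debris still
# fall along world_down, but on screen they fall toward apparent_down.
print(np.round(R @ world_down, 3))   # -> [1. 0. 0.]
```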

The big change in this technology is the real-time speed (i.e., fast!) at which the cameras can now be moved with high accuracy. The control software and motors have been refined so that the possibilities of motion are now beyond what a human operator can do. For example, here is an industrial pancake-sorting robot (!) performing at about three pancakes per second, and another playing perfect pool.

Of course, you need a director who is up to using these tools, and this was the case. Alfonso Cuarón has apparently wanted to be two things, an astronaut and a director, and directing pays better. He is also known for his very long continuous shots with no cuts, and the opening scene, as one of the characters in the movie says, "breaks the record". The cinematography really is the core of the movie, since the alien world of zero G, where momentum is dominant, is the antagonist. (Check out a demo of angular momentum by ISS astronaut Mike Fossum before you see the movie!)

The difficulty is that the camera motion is now the star of the show, meaning the actors have to adapt to the shot, and the shot has to be pre-visualized and planned more carefully, as described here by the director. Although the control system opens up new possibilities, organic control and intuitive direction of the tool become the next challenge. "Hey Robocam, orbit around Sandra B's head as she squints into the sun setting behind the rim of the Earth"… no? You only understand 6-degree-of-freedom target points and motion splines? And you want a union? Hmm.

For a great astronaut's view of the movie, check out Mark Kelly's write-up at the Washington Post.

Check out this interview with the Cinematographer, Emmanuel Lubezki, who discusses the virtual lighting challenges in the movie.

For the excellent in-depth view of the movie’s tech, check out CG Society’s article.

And if you think the plot line is somewhat infeasible, this overview from ESA might convince you otherwise: space junk is a huge problem for astronauts, and satellites and manned missions do have to get out of the way periodically!

If you are interested in the unique editing and shot style of the movie, contrast it with the "traditional repertoire" of editing techniques for moving from one shot to the next. This old-school web site gives a great overview, with long-duration shots, or "plan séquence", near the bottom.

Track space junk (and stuff that most certainly isn't junk) using SpaceJunk for Android, Night Sky for iOS, or n2yo's site for desktop.

Canon APS-C wide zooms – EF-M versus EF-S

Yay Canon Canada

EF-S 10-22mm with adapter versus EF-M 11-22mm. The red area shows an overlay of equivalent size.

I have had the 11-22 for a week or so now, from Henry's here in Canada, and took the chance to compare it to the EF-S 10-22 using the EF-M-to-EF adapter on the EOS-M. Up until now I've kept the 10-22 on the EOS-M for tourist and casual shooting, as well as for a few commercial shoots as a secondary camera.

Considering that a good Micro Four Thirds super-wide costs $650-$800 and the EF-S 10-22mm is closer to $900, $400 is a bit of a steal.

Here are my findings from using it and comparing it to the EF-S 10-22mm.
