UX Science – measurement gets the answer

The simplification of user interfaces has been proceeding quickly now that the last vestiges of skeuomorphism are gone. Like any new technology, the first iterations of an interface must be familiar to its users. Early cars looked like carriages, early lightbulbs behaved like gaslight, early televisions looked like radios, and the first home computers worked like typewriters (and still do!).

But any design trend ultimately overshoots the mark. In this case, iconography has possibly become oversimplified, and buttons without outlines or contrast fill are being used simply because retina-class displays can render the fine line widths.

Curt Arledge addresses one basic question in this user interface direction: does an outline or a contrast-filled button have better usability? Check out his results here.

In summary, two things seem to matter. First, the users’ familiarity with the icon type: the common iconography alphabet that all interfaces share to a great degree. Second, user testing is still required, since differences appear in counter-intuitive places, and some design decisions affect usability less than expected.

The Robotics Behind the Gravity Movie

 

One of many prototype space suits you and I can see, but not try on, at Kennedy Space Center

First, go see Gravity in 3D IMAX. Why? Because Patriotism: the ISS is partly Canadian, the Canadarm and Dextre appear in the movie, and IMAX is a Canadian invention. Also, IMAX has been to the real ISS. 3D is almost mandatory in a space movie, where there are fewer environmental cues to indicate relative position, like, say, the Earth 300 miles below you, or the peaks of the Himalayas 1% closer.

The movie itself was made with the help of robotic arms to lend a complete sense of zero-G choreography. Cameras have been controlled robotically for at least 40 years for special-effects shots. Motion control cameras were developed for Star Wars (computer controlled) and 2001: A Space Odyssey (mechanical), since those films required repeated camera passes of the same spaceship models, allowing each pass’s film to be overlaid with the next, lining up things like lights, background, and engine glow perfectly. No human could match the repeatability. Check out this video about John Dykstra’s pioneering work on robotic camera control.

But robots have another capability beyond endless perfect repetition: the motion can be planned, carefully, along paths that would tax a camera operator, with simultaneous pan, tilt, zoom, dolly, and rack in smooth, elegant arcs. In the world of CG, cameras have always had this freedom to move on gentle curves while maintaining flawless lock on a subject (in fact it takes quite a bit of talent to add back in the ‘human’ elements of a camera operator, as demonstrated so well in Battlestar Galactica).
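To make that concrete, here is a minimal Python sketch (emphatically not the film’s production pipeline) of a virtual camera move: sample positions along a smooth spline and, at each sample, compute a look-at orientation that keeps a subject point locked in frame. The waypoints, timing, and subject position are all invented for illustration.

```python
# A toy "virtual camera" move: smooth position spline + constant look-at lock.
# All waypoints and the subject position are invented for illustration.
import numpy as np
from scipy.interpolate import CubicSpline

def look_at(cam_pos, target, world_up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose rows are the camera's right/up/forward axes."""
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, world_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    return np.vstack([right, up, forward])

# A few key camera positions (metres) and the times to hit them (seconds).
key_times = np.array([0.0, 2.0, 4.0, 6.0])
key_positions = np.array([
    [ 4.0, -3.0, 1.5],
    [ 2.0,  0.0, 2.0],
    [ 0.0,  2.5, 1.0],
    [-3.0,  3.0, 1.8],
])
subject = np.array([0.0, 0.0, 1.6])   # e.g. an actor's head, held in frame throughout

path = CubicSpline(key_times, key_positions)   # smooth C2 path through the keys

for t in np.linspace(0.0, 6.0, 7):             # sample the move
    pos = path(t)
    R = look_at(pos, subject)
    print(f"t={t:3.1f}s  pos={np.round(pos, 2)}  forward={np.round(R[2], 2)}")
```

A real motion-control rig additionally has to respect joint speed and acceleration limits, but the core point survives: once the move is authored as data, the robot can repeat it exactly, and the choreography can be retimed or tweaked without a camera operator in the loop.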

Putting those virtual camera moves into a real camera is something that Bot & Dolly handled for Gravity: their demo reel shows the alien smoothness and confidence of a robotic camera in action. Another demo reel, by The Marmalade, shows the new high-frame-rate possibilities of precise high-speed motion control using their Spike robotic camera system.

The technology is nothing without the illusion-creating setup. One robot can be fitted with the camera. Another can be fitted with, say, an actor/astronaut, or a keylight. Moving these two carefully can ‘null out’ any hint that a scene was filmed in Earth’s gravity, since the gravity we expect to come from one direction in a scene can now appear to come from anywhere. In fact, full-motion flight simulators use the same trick of redirecting gravity to simulate the feel a pilot would have in a real aircraft. Like any good special effect, the trick is to fool the senses, not to simulate reality.
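A back-of-the-envelope way to see the trick: what the audience reads as ‘down’ is just the direction of gravity (and of the keylight) expressed in the camera’s frame of reference, so rotating the camera-and-light rig around a stationary actor puts ‘down’ wherever you like. The short Python sketch below only illustrates that coordinate change; the single-axis rotation and the angles are invented, and this is not how the film’s rigs were actually programmed.

```python
# Where does "down" appear to be on screen? Only the camera's orientation decides.
# Rotating the camera + keylight rig (not the actor) relabels gravity for the audience.
import numpy as np

def rot_x(deg):
    """Rotation about the rig's x-axis (a simple stand-in for a 6-axis robot pose)."""
    a = np.radians(deg)
    return np.array([[1.0, 0.0,        0.0       ],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

g_world = np.array([0.0, 0.0, -9.81])          # real gravity, straight down in the studio

for roll in (0, 45, 90, 180):                  # rig orientations, chosen arbitrarily
    world_to_camera = rot_x(roll).T            # inverse of the rig's rotation
    g_camera = world_to_camera @ g_world       # gravity as seen by the camera
    print(f"rig rolled {roll:3d} deg -> apparent 'down' in frame: {np.round(g_camera, 2)}")
```

The printed ‘down’ vector swings around while the actor (and the studio’s real gravity) never moves, which is the whole illusion.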

The big change in this technology is the real-time speed (i.e. fast!) at which the cameras can now be moved with high accuracy. The control software and motors have been refined to the point that the possible motions are beyond what a human operator can do. For example, here is an industrial pancake-sorting robot (!) performing at about 3 pancakes per second, and another playing perfect pool.

Of course, you need a Director who is up to using these tools, and this was the case. Alfonso Cuarón has apparently always wanted to be two things, an Astronaut and a Director, and directing pays better. He is also known for his very long continuous shots with no cuts, and the opening scene, as one of the characters in the movie says, “breaks the record”. The cinematography really is the core of the movie, since the alien world of zero G, where momentum is dominant, is the Antagonist. (Check out a demo by ISS astronaut Mike Fossum on angular momentum before you see the movie!)

The difficulty is that the camera motion is now the star of the show, meaning the actors have to adapt to the shot, and the shot has to be pre-visualized and planned more carefully, as described here by the Director. Although the control system opens up new possibilities, organic control and intuitive direction of the tool become the next challenge. “Hey Robocam, orbit around Sandra B’s head as she squints into the sun setting behind the rim of the Earth.” … No? You only understand 6-degree-of-freedom target points and motion splines? And you want a Union? Hmm.

For a great Astronaut’s View of the movie, check out Mark Kelly’s write-up at the Washington Post.

Check out this interview with the Cinematographer, Emmanuel Lubezki, who discusses the virtual lighting challenges in the movie.

For an excellent in-depth view of the movie’s tech, check out CG Society’s article.

And if you think that the plotline is somewhat infeasible, this overview from ESA might convince you otherwise: space junk is a huge problem for astronauts, and satellites and manned missions do have to get out of the way periodically!

If you are interested in the unique editing and shot style of the movie, contrast it with the “traditional repertoire” of editing techniques used to move from one shot to the next. This old-school web site gives a great overview, with long-duration shots, or “Plan Sequence”, covered near the bottom.

Track space junk (and stuff that most certainly isn’t junk) using spacejunk for Android, Night Sky for iOS, or nyso’s site for desktop.

Canon APS-C wide zooms – EF-M versus EF-S

Yay Canon Canada

EF-S 10-22mm with adapter versus EF-M 11-22mm. The red area shows an overlay of equivalent size.

I have had the 11-22 for a week or so now, from Henry’s here in Canada, and took the chance to compare it to the EF-S 10-22 with the EF-M to EF adapter on the EOS-M. Up until now I’ve kept the 10-22 on the EOS-M for tourist and casual shooting, as well as for a few commercial shoots as a secondary camera.

Considering that a good Micro Four Thirds super-wide costs $650-$800 and the EF-S 10-22mm is closer to $900, the 11-22 at $400 is a bit of a steal.

Here are my findings from using it and comparing it to the EF-S 10-22mm.

Continue reading

The power of algorithms: driver training for your camera

As part of our work I get to do quite a bit of photography and videography. And an interesting shot is often one taken from a tight spot.

The EOS-M from Canon was a very handy camera: the size of the ‘heroic’ small video cameras, but with an APS-C sensor.

It helped that they were being sold at 40-50% off the original price because of a firmware issue that turned many people off buying them.

As Warren Buffett might say: buy value, not perception. This little camera was indeed perceived as a slowpoke because of its comically slow autofocus, but having spent enough time writing algorithms for servo control, I knew that it was probably just a firmware update away from being faster. That turned out to be the case when firmware 2.0.2 was released. Suddenly the camera was twice as fast at focusing, and perceptually it felt like someone had woken it up with BTTF3 Wake-Up Juice. Amazing? Just engineering:

First, a camera with autofocus actually has to move parts of the lens using a motor, then check how blurry the image is, then repeat hundreds of times a second. It’s really a robot eyeball, so we’ll call it an “EyeBot”. Like all robots, its motion is determined by its internal control system, which really means the pathway from sensing to deciding to doing and back again.

Remember the old saying, “Fast, Cheap, or Good: pick any two”?

For an EyeBot control system, the three items are: “Fast, Power-efficient, or Accurate: pick any two-ish.”
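To make the sensing-deciding-doing loop concrete, here is a toy Python sketch of a contrast-detect focus hunt: nudge the focus, measure a sharpness score, and keep climbing until the score stops improving, shrinking the step as you bracket the peak. The sharpness function, step sizes, and convergence threshold are invented stand-ins, not Canon’s firmware; the real thing contends with sensor noise, motor limits, and hybrid phase-detect hints, which is exactly where the “pick any two-ish” trade-off lives.

```python
# A toy contrast-detect autofocus loop: move, measure sharpness, repeat.
# sharpness() is a stand-in for measuring edge contrast on the sensor;
# the peak position, step size, and threshold are invented for the example.
def sharpness(focus_position, best_focus=0.37):
    """Pretend contrast score: highest when focus_position is at best_focus."""
    return 1.0 / (1.0 + (focus_position - best_focus) ** 2)

def autofocus(start=0.0, step=0.05, max_iters=100):
    pos = start
    score = sharpness(pos)
    direction = +1                      # which way to drive the focus motor
    for i in range(max_iters):
        new_score = sharpness(pos + direction * step)
        if new_score > score:           # getting sharper: keep going
            pos += direction * step
            score = new_score
        elif step > 1e-3:               # overshot the peak: reverse and refine
            direction = -direction
            step *= 0.5
        else:                           # step is tiny: call it focused
            break
    return pos, score, i + 1

pos, score, iters = autofocus()
print(f"settled at focus={pos:.3f} after {iters} steps (score={score:.3f})")
# Bigger steps settle faster but less precisely; smaller steps do the opposite:
# the "Fast, Power-efficient, or Accurate: pick any two-ish" trade-off in miniature.
```

Halving the step whenever the score drops is the simplest possible refinement strategy; the firmware update presumably did something considerably smarter, but the shape of the loop is the same.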

Continue reading