Publication List Update

In the course of getting a job done, we all end up doing a bit of research. Here are some of the projects I’ve contributed to, from artificial intelligence to aircraft design, tissue simulation, human-machine interfaces and Lego Mindstorms! Feel free to check them out. Wherever possible, I’ve added the presentation versions, which are a bit more visual and a lot less text!


How to effectively study

Alan Turing Statue

Alan Turing: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

You never stop learning, which means you never stop studying. Sometimes the hardest things to learn are those that don’t have a concrete test or exam at the end. How do you know how well you did in a race if there are no hurdles, laps, timer or finish line? That’s part of being an adult, and actually a part of a Human Factors model where your “comfort zone” must be stretched into an area where you are uncomfortable, but, as it turns out, competent.

A good way to stretch yourself toward learning something new is to go beyond just reading the manual. Humans are designed to learn by doing, so working through examples and writing practice exams is generally more effective than plain linear reading.

Why?

 

Continue reading

What’s All This Bob Pease Stuff, Anyhow?

Bob Pease in a Nutshell


If you’ve done any electronics hacking, designing, pondering, or frustrating, you may have googled your way to Bob Pease and one of his articles following the title format “What’s All This XYZ Stuff, Anyhow?”, where XYZ is some arcane but important corner of electrical engineering steeped in dogma, myth, and folklore. His M.O. is to blow all that fuzz away and leave behind clear understanding and core concepts. He’s one of those authors who make you feel smarter. Here are some of his articles from Electronic Design. Many others are paywalled, but they were nice enough to share some of the best (by their reckoning), and the control systems ones are certainly of interest to me.

What’s All This P-I-D Stuff, Anyhow?
What’s All This Double-Clutching Stuff, Anyhow?
What’s All This Negative Feedback Stuff, Anyhow?
What’s All This Spicey Stuff, Anyhow? (Part II)
Bob’s Mailbox: Audio Quality, A Crazy Rack, And The PE Exam
What’s All This Input Impedance Stuff, Anyhow?
What’s All This Current Limiter Stuff, Anyhow?

If you like watching him chat it up and seeing the demos, his videos by TI and others are all over YouTube, which I am sure he could have replicated using six op amps and a pair of LEDs.

Book Review: Information Graphics published by Taschen


Taschen, 145,366 pages (approximately)

This is physically the largest book I own, and I somehow managed to get it home from LA in my carry-on baggage. Probably the most thoughtful part of this book is the mini-book slightly inset into the first 96 pages: a historical view of information graphics, from their origins through the 20th century.

Taschen book: oil rig

The book is an example of its own subject, being both information and graphics. It is inspirational, in that it can be opened at random for a new hint at visualizing numbers and drawing meaningful emphasis out of large data sets. But it can also be used systematically, to identify the type of data that must be presented and solutions for its presentation: the sections are divided into graphics that best show data based on Location, Time, Hierarchy, and Category.

Of course, this is a printed book, so animated or interactive infographics can’t easily be shown. However, I’ve found that if you can’t get your meaning across in a static infographic, or at least as a storyboard, you are not going to do a user any favours by making them play with your interactive interface.

The Pioneer 10 plaque

There is a reason the Pioneer and Voyager space probes carry infographics as their first means of communication with any alien intelligence that may find them; this is a universal language worth being literate in. As with any language, it is harder to create simplicity than it first seems, and this book is a great support to lean against in the first few minutes of planning your next bar chart or 9-dimensional genetic map of Canadian immigration patterns.

Continue reading

Oh yeah, the Arduino

The California Science Center is now home to the Space Shuttle Endeavour, and while I get together my article on the exhibit, I wanted to share a bit of fun that was had with the Metal Earth laser-cut “metalgami” model I picked up there:

These little models are precision-made and kind of make you wish you were precision-made to the same level. This is somewhere between origami and “Slot A into Tab B” kind of work, and the fun is definitely in the assembly.

Still, it is a little bit of genius to see how each 3D form comes from a piece of a common metal sheet, and all you really need is a pair of tweezers and maybe a few wooden dowels and a straightedge to get some of the curves right.

Is it for Kids? Certain kinds of kids – the ones that work with Lego Technic or K’NEX or, of course, origami. Is it for Engineers who grew up with the Shuttle? Heck yes.

Arduino for time-lapse

The time-lapse video was made with an Arduino board that triggered the infrared remote receiver of an EOS-M every 2 seconds or so. The extra parts were a 9V battery (feeding the 5V board’s power input) and a standard IR LED. The EOS-M is a cheap way to get a good APS-C-sized sensor in a tiny package, but it has no straightforward way to trigger remotely via a cable.
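For anyone who wants to reproduce the trick, a minimal sketch along these lines is below. This is an illustration rather than the exact code used for the video: the pin assignment, the 2-second interval, and the IR timing constants are assumptions based on hobbyist write-ups of Canon’s RC-1/RC-6 remote protocol, so verify them against your own camera before trusting them.

```cpp
// Hypothetical Arduino intervalometer for a Canon IR-remote-capable camera.
// Protocol assumption (from hobbyist reverse-engineering, not Canon docs):
// two 16-cycle bursts of ~32.7 kHz carrier, roughly 7.3 ms apart, mean
// "release the shutter now".

const int IR_LED_PIN = 13;                    // IR LED with a series resistor
const unsigned long SHOT_INTERVAL_MS = 2000;  // one frame every ~2 seconds

// One burst of 16 carrier cycles at roughly 32.7 kHz.
void carrierBurst() {
  for (int i = 0; i < 16; i++) {
    digitalWrite(IR_LED_PIN, HIGH);
    delayMicroseconds(11);   // ~half a carrier period, minus digitalWrite overhead
    digitalWrite(IR_LED_PIN, LOW);
    delayMicroseconds(11);
  }
}

// Two bursts separated by ~7.3 ms: the "fire now" command.
void triggerShutter() {
  carrierBurst();
  delayMicroseconds(7330);
  carrierBurst();
}

void setup() {
  pinMode(IR_LED_PIN, OUTPUT);
  digitalWrite(IR_LED_PIN, LOW);
}

void loop() {
  triggerShutter();
  delay(SHOT_INTERVAL_MS);   // wait, then grab the next frame
}
```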

 

The power of algorithms: driver training for your camera

As part of our work I get to do quite a bit of photography and videography. And an interesting shot is often one taken from a tight spot.

The EOS-M from Canon was a very handy camera – the size of the ‘heroic’ small video cameras, but with an APS-C sensor.

It helped that they were being sold at 40-50% off the original price because of a firmware issue that turned many people off buying them.

As Warren Buffett might say – buy value, not perception. This little camera was indeed perceived as a slowpoke because of its comically slow autofocus, but having spent enough time writing algorithms for servo control, I knew that it was probably just a firmware update away from being faster. That turned out to be the case when firmware 2.0.2 was released. Suddenly the camera was twice as fast at focusing, and perceptually it felt like someone had woken it up with BTTF3 wake-up juice. Amazing? Just engineering:

First, a camera with autofocus actually has to move parts of the lens using a motor, then check how blurry the image is, then repeat hundreds of times a second. It’s really a robot eyeball, so we’ll call it an “EyeBot”. Like all robots, its motion is determined by its internal control system – which really means the pathway from sensing to deciding to doing and back again.

Remember the old saying: “Fast, Cheap, or Good: pick any two”?

For an Eyebot Control System, the three items are: “Fast, Power-efficient, or Accurate: pick any two-ish.”
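As a concrete (and very simplified) illustration of that sense-decide-act loop, here is a toy contrast-detect autofocus routine. Everything in it is made up for the example – the sharpness() curve stands in for “grab a frame and measure blur”, and the step size is the knob that trades speed against accuracy and motor power; it is not Canon’s firmware.

```cpp
// Toy "EyeBot" control loop: hill-climb the lens position toward maximum
// sharpness. A bigger starting step is faster but coarser; a smaller one is
// more precise but needs more motor moves (i.e. more time and power).
#include <cmath>
#include <cstdio>

// Stand-in for "capture a frame and measure its contrast"; the algorithm
// does not know that the peak sits at lens position 42.
double sharpness(double lensPos) {
  const double bestFocus = 42.0;
  return 1.0 / (1.0 + (lensPos - bestFocus) * (lensPos - bestFocus));
}

int main() {
  double pos = 0.0;    // current lens position
  double step = 8.0;   // search step: the speed vs. accuracy knob
  int motorMoves = 0;  // rough proxy for power spent

  double best = sharpness(pos);
  while (std::fabs(step) > 0.05) {   // stop once the refinements are tiny
    double ahead = sharpness(pos + step);
    if (ahead > best) {              // sharper in this direction: keep going
      pos += step;
      best = ahead;
    } else {                         // overshot the peak: reverse and halve the step
      step *= -0.5;
    }
    ++motorMoves;
  }
  std::printf("settled at %.2f after %d motor moves\n", pos, motorMoves);
  return 0;
}
```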

Continue reading

Motion Amplification in video – seeing the invisible

Still frame from the Eulerian video magnification demo

When we think of enhancing vision, we tend to think in terms we can really get a grasp on:

  • Zoom in (enhance!)
  • night vision
  • microscopy (really zoom in)
  • long exposure (astronomy)
  • time-lapse (watch that glacier hustle)
  • slow motion (Mythbusters finales)

My favourite development in computer vision at SIGGRAPH 2012 was Eulerian Video Magnification.

Essentially, it amplifies motion in video. You have to see it to really get a grasp on what it means, so here is the video. Check out the throbbing arm artery!

The secret here is to look at changes in the video from moment to moment, just like when you flash between two photos you took at a party and can easily see the differences. The algorithm tracks those changes across many frames over time, keeping only the differences.

But then the clever bit: what if you only pay attention to certain rates of change and ignore the rest? For example, imagine video of two pendulums swinging next to each other – one short and fast, the other long and slow. Using Eulerian methods, you could ignore the fast motions or ignore the slow motions, much like a graphic equalizer in audio isolates the bass and treble frequencies in your music. You could effectively filter out either the slow or the fast pendulum depending on your ‘equalizer’ setting.

Once you’ve isolated the motions you want to enhance, you add the resulting difference image back into the video frame. If you really want to enhance it, you do this several times in a row using a feedback loop. The more times you feed back the difference, the more that specific motion gets exaggerated compared to the other motions in the video.
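To make the recipe concrete, here is a minimal, hypothetical sketch of the amplify-the-filtered-differences idea. The function name, parameters, and the use of two running averages as a crude temporal band-pass filter are my own simplification for illustration; the actual SIGGRAPH method filters spatially decomposed (pyramid) versions of the frames rather than raw pixels.

```cpp
// Toy Eulerian-style motion magnification: per-pixel temporal band-pass
// (difference of a fast and a slow exponential moving average), amplified
// and added back onto each frame.
#include <cstddef>
#include <vector>

using Frame = std::vector<float>;  // one grayscale frame, row-major pixel values

std::vector<Frame> magnifyMotion(const std::vector<Frame>& video,
                                 float fastAlpha,  // follows quick changes (e.g. 0.4)
                                 float slowAlpha,  // follows slow drift     (e.g. 0.05)
                                 float gain) {     // how much to exaggerate the band
  std::vector<Frame> out(video);
  if (video.empty()) return out;

  Frame fast = video[0];  // per-pixel running averages, seeded with frame 0
  Frame slow = video[0];
  for (std::size_t t = 1; t < video.size(); ++t) {
    for (std::size_t p = 0; p < video[t].size(); ++p) {
      // Update both low-pass filters for this pixel.
      fast[p] += fastAlpha * (video[t][p] - fast[p]);
      slow[p] += slowAlpha * (video[t][p] - slow[p]);
      // Their difference keeps only changes between the two rates -- the
      // "graphic equalizer" band -- which gets amplified and added back.
      out[t][p] = video[t][p] + gain * (fast[p] - slow[p]);
    }
  }
  return out;
}
```

Feeding the output back through the same function is the repeated-feedback step described above: each pass exaggerates the selected band of motion a little more.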

This works best for periodic motions, like a pulse or guitar string or heartbeat, but enhancing specific motions is pretty awesome. In fact, you do it all the time without really being aware.

How? Well, try waving your hand at the very edge of your vision while holding up 2 fingers. Can you notice the motion? YEP. But can you count the fingers? NOPE. The edges of your vision are tuned to detect motion, not detail, while the centre of your vision is tuned for detail. You’re not going to read a book out of the corner of your eye, but you’ll definitely notice the sabre-toothed tiger coming at you.

Of course, you have to watch the video, still images don’t really catch the effect!