Those of you who have kept up with some of Microsoft’s new toys (or who follow my Twitter) have undoubtedly heard of Photosynth, a new Seadragon-based photo interface they have been working on in conjunction with the University of Washington. This new spatial photo organization system sent the tech world abuzz when news, video, and a tech demo began passing back and forth across sites like Digg and Slashdot.
A few days ago, on August 20th, Microsoft officially took the technology out of the “look but don’t touch” phase and began enabling users to log in and create their own “synths.” This amounts to going out, taking a ton of pictures of something, and uploading them. No other user intervention is needed aside from naming the synth and tagging it. The system calculates groups, intersections, perspective, placement, and so on. Great for saving time; not so great if you want to adjust something. In my tests, it occasionally failed to connect groups that clearly went together, and there’s no way to tell it otherwise yet.
My first instinct is that even though this is a little heavyweight on computing power needs, it’s an awesome idea for virtual campus tours. I was recently considering shooting a bunch of videos around the university – walking around, looking at things, and so on – to give potential students a better feel of “being there.” Instead, I’m considering putting some time into building out some of the key areas of campus in Photosynth (along with the videos as well). Some colleges are already playing with this. The reason I see this as a huge opportunity is that it moves out of the passive nature of video and creates an active environment students can explore in about as much detail as you make possible.
Furthermore, the ways they are working on improving this technology are simply awesome. It gets me thinking about how interesting it is that 2D still images could beat out video as an interactive tool on the internet. In reality, almost everything is more interactive than video, but video is viewed as more dynamic because it has moving pictures. Right about the time Photosynth went live, this video came out detailing some of the advances they are already making to the photo tourism software.
I worked up four quick examples just to test things out, which I have linked just below. I found that with a subject of any size, you should really try to hit it with at least fifty shots. I maxed my synths out at about 120 shots, but could see using far more on other things in the future. I also found that it works much better if the camera is not at the center of the synth; instead, try to keep the camera outside looking in. In the first example, the Russ Hall stairway, clicking the “Switch to the next 3D Group” button will show you how many different groups it put together, because it seemed to have some problems organizing things in an indoor 3D space. It also probably didn’t help that I was moving all over, trying to cover things from different angles. Note that in the video I mentioned in the previous paragraph, it looks like they are working on this very issue. Outside works much more nicely: taking a target and moving around it seems to produce better results.
You can also see how things work from a single vantage point (JungleTron), and how the orbits come together (statue).
What it Has
- Embeddable: You can embed synths in existing web pages, and there is a nice splash prompt asking users to download the plugin if they haven’t (the splash even shows that specific synth’s thumbnail). U of W’s web site also shows an example using a Java applet, though that doesn’t appear to be a publicly available feature yet.
- Engaging: Rather than just asking a user to sit and watch something, or flip through a handful of disconnected images, you are encouraging someone to interact with your site and campus. Consider the potential for Easter Eggs.
- Easy (what’s with all these E words?): While the ease of use forces you to sacrifice control (see the other E word below), it makes it stupidly easy to deploy and use, which is a great time saver and makes the barrier to entry low.
- Single Image View: Users don’t have to remain trapped in the interactive world if they don’t want to, and can switch to a simple thumbnail gallery.
- Storage: Microsoft was generous in giving users 20GB of storage space for synths. That’s a lot. For comparison, my four synths are built from 364 1600×1200 images that took up about 0.3GB (a little over 300MB).
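To put that storage comparison in perspective, here is a quick back-of-envelope calculation using my own numbers (which are just one data point, and will vary with your resolution and photo count):

```python
# Rough storage math for the synth numbers mentioned above.
# Inputs come from my own four test synths; treat them as ballpark figures.
IMAGES = 364        # total 1600x1200 photos across the four synths
TOTAL_MB = 300      # roughly 0.3 GB on disk
QUOTA_GB = 20       # Microsoft's per-user storage allowance

per_image_mb = TOTAL_MB / IMAGES                  # average cost per photo
batches_in_quota = (QUOTA_GB * 1024) / TOTAL_MB   # similar-sized uploads that fit

print(f"~{per_image_mb:.2f} MB per image")
print(f"~{batches_in_quota:.0f} similar-sized uploads before hitting the cap")
```

In other words, at this resolution you could upload dozens of projects the size of my entire four-synth batch before the quota became a concern.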
What it Needs
- A preview mode: This is especially true if you are shooting in hi-res of any kind. Prior to taking all the time to upload a couple hundred megs of photos, it’d be nice if you could make sure it looks right.
- Better indoor modeling: Right now Photosynth seems to have some trouble modeling from inside a structure looking out. Recent demonstrations indicate this will be improved soon.
- Manual stitching: If Photosynth defines a set of photos as a separate 3D group, it’d be nice if you could manually match it to another 3D group that it goes with if it made a mistake.
- More OS support: Few tears will be spilled over those of us crazy enough to use Linux, but the Mac crowd is big enough that Microsoft needs to get on the ball, or risk Apple et al. doing it better to please their customers.
- Clean permalinks: Hopefully they will hurry up and do something about the links to individual collections. At the moment, they contain long synth IDs that are impossible to relay verbally. It would be nice to see something more like http://photosynth.net/username/synthname.
- Editing: Currently, you can’t add or remove photos from a completed synth, and you can’t edit existing photos in place. You also can’t download all the images used as a batch (that I know of).
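To make the clean-permalink wish concrete, here is a hypothetical sketch of how a readable /username/synthname path could be mapped onto the opaque IDs the service uses today. None of this is a real Photosynth API; the username, synth name, and ID below are invented for illustration.

```python
import re
from typing import Optional

# Hypothetical lookup table: (username, synthname) -> opaque synth ID.
# Both entries are made up; a real service would query its database here.
SYNTHS = {
    ("someuser", "russ-hall-stairway"): "a1b2c3d4-0000-0000-0000-000000000000",
}

def resolve(path: str) -> Optional[str]:
    """Translate a clean '/username/synthname' path into a synth ID."""
    m = re.fullmatch(r"/([\w-]+)/([\w-]+)", path)
    if not m:
        return None
    return SYNTHS.get((m.group(1), m.group(2)))

print(resolve("/someuser/russ-hall-stairway"))  # the mapped synth ID
print(resolve("/nosuchuser/nothing"))           # None for unknown paths
```

The point is simply that the ugly ID never has to appear in the address bar; a one-time lookup on the server side is all it takes.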
Is it Ready?
Unfortunately, not just yet, at least not as a primary campus marketing tool. But, if you have the time and resources to start playing with it, I think there’s plenty of value in it, even at this early stage. Since the whole process (besides taking the photos) is automated, it doesn’t actually take much to slap a synth together and post it. Imagine groups like your art department and what they could show off with this kind of functionality.
I think the keys to success with this are all in the “What it Needs” section. If Microsoft can bull ahead through those issues, particularly making it OS and browser agnostic, I could see this quickly becoming the tour software du jour. I’ll also be interested in how sites like Flickr respond, be it licensing the software or producing their own. There is always the “Microsoft” variable that can and will keep plenty of people from committing to it. I don’t entirely blame them, as I am frequently against putting all your eggs in one third-party basket. The only reason I differ on that opinion in this case is because, well, there is no alternative yet, and given what I’ve seen in that latest video, I’m not sure there’s much improvement even necessary.