Saturday, August 23, 2014

Recruiting Senior Software Developers

There is a lot of demand for the limited supply of senior software developers in the world. My colleagues and I get several messages a week from recruiters, and many of them are laughably poor attempts to attract our interest.

How can they believe that we would leave our wonderful jobs because of health insurance, or an office "fully stocked with snacks"? I'm not sure I know a single developer without health insurance, and so many startups offer snacks that snack delivery is a booming industry!

What about the recruiters who do manage to convince a developer to write back? They often forget to build a relationship and keep the candidate's interest. Instead they lose sight of the person they're trying to push through "The Process".

How can this be? It doesn't make sense to compete for limited resources in such a half-hearted way. Those guys with the big beards wouldn't start an Alaskan gold mine with plastic beach shovels and a kitchen colander! Recruiting is an expensive activity, especially when the rest of the team should be focused on building a product. There is more to hiring good developers than sending LinkedIn form letters, scheduling interviews, and writing job requirements.

I wrote an essay about this for the Business of Software Blog: "Your Minimum Viable Recruiting Process". Give it a read if your company is trying to recruit developers, or engineers of any type. You'll learn what it's like to slog through the industry-standard recruiting process. You'll see how your shrewdest competitors are snagging the top talent.

If you enjoy the read, consider signing up below. I'll send the occasional entertaining email on improving your recruiting process, along with early access to drafts of my essays on the topic.

Tuesday, May 27, 2014

More App Store Ad Experiments: Platform Targeting

Note: This essay continues from questions raised by the previous essay on A/B testing in the App Store.

In my last essay, I explored some ideas on how to improve results in the App Store by experimenting with ads. I ran Facebook ads to A/B test the click-through rates (CTR**) of two different images.

Based on the differing CTRs, I jumped to the conclusion that the composition of the images made the difference. In one ad, an entire iPad was shown. In the other ad, most of an iPhone was cut off, except for the screen. In other words, I believed that the way I depicted the iPad or iPhone resulted in more clicks. I didn't doubt that I was one step closer to buying a tropical island.

Manton Reece read my article and immediately wondered if maybe the iPad vs. iPhone split could account for the difference. In other words, maybe folks were more likely to click on an ad that depicted the device they were holding in their hands. I immediately slapped my forehead. My app island slipped below the waves. I vowed to figure out what really happened with my last experiment.

TL;DR: Manton was correct. Implementing the changes suggested by his hypothesis also improved my CTRs quite a bit.

A New Experiment

To start examining the Manton Hypothesis, I first tried sifting through the ad data Facebook provides. Perhaps I could see which clicks belonged to iPads and which belonged to iPhones or iPod Touches*. Unfortunately I couldn't find that information. It didn't seem like I could examine Manton’s theory with the ad data I already collected.

No worries, I can find out with another experiment! Facebook's Mobile App Ads for Installs (!) allow each ad to target a specific device. Keeping the other parameters the same, I set the underperforming ad (the one with a photo of an iPad) to only target iPads. Even one day into the experiment, it seemed like Manton was correct. The CTR for the iPad photo jumped up nicely.

After a week and a half of targeting the iPad ad to the iPad, the CTR rose from 1.5% to 3.08%. Double!

I was so impressed, I decided to try the same thing for the ad with the cropped iPhone. I targeted it to only display on iPhones or iPod Touches.

This time I didn't expect to see a huge jump in performance. Why? Remember, in the previous article, the iPhone ad was already out-performing the iPad ad. If the Manton hypothesis were the only explanation for the difference in CTRs, it would imply that far more iPhone and iPod Touch users were seeing my ads than iPad users. The diagram below shows how the ad with the iPhone photo gets a higher CTR when both ads get the same mix of iPhone and iPad users.

As the diagram shows, the iPad photo (right side) gets a lower CTR because the viewers as a whole are dominated by iPhone users. According to the Manton Hypothesis, iPhone users aren't as likely to click on a photo of an iPad. That’s why the pie is smaller on the right, and why that pie is mostly iPad clicks.

But the iPhone photo has the reverse situation: the iPhone users still dominate, but this time they are seeing an iPhone photo. So in this case the majority of the users see a photo which matches the device in their hands -- a favorable situation. This leads to the bigger pie on the left. This time the less interested party is the iPad users. They are the smaller percentage of the folks seeing the ad, so their diminished CTR has a smaller impact on the aggregate CTR.

So, what happens when you target the iPhone ad at only the iPhone users? Just as the Manton Hypothesis predicts, the CTR improved, but the jump was smaller than the one for the iPad ad. The CTR for the iPhone photo was 2.686% the week before the change, and 3.128% after the change.

Ad Description | Combined CTR | CTR targeting only the pictured device
iPad Photo     | 1.5%         | 3.08%
iPhone Photo   | 2.686%       | 3.128%


But here is an important point: the lifetime combined CTR when I was indiscriminately showing an iPhone photo to both iPhones and iPads was 2.931%. The baseline I picked for the numbers above was the week before my change. The improvement looks pretty small when compared to the larger baseline: 2.931% vs. 3.128% doesn't seem as exciting as 2.686% vs. 3.128%.

Why was the week-prior CTR lower than the all-time CTR? I don't know for sure. The numbers I’m dealing with here are small. I’m not spending hundreds of dollars a day on ads, and I’m not getting a huge number of installs. I don't have a $50,000 advertising budget. These tests are being done for $5 a day.

My experiments here come from thousands of clicks, not tens of thousands or millions. The sample size may not be large enough. What looks like an insight might just be noise. So take these results with a grain of salt. Or, even better, run your own experiments using your own money.
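
If you want a rough feel for whether a difference like this is real or noise, a quick back-of-the-envelope check is a normal-approximation confidence interval around each CTR. Here's a minimal Python sketch; the click and impression counts below are made-up placeholders, not my actual campaign numbers.

    import math

    def ctr_interval(clicks, impressions, z=1.96):
        """Approximate 95% confidence interval for a CTR."""
        p = clicks / float(impressions)
        half_width = z * math.sqrt(p * (1.0 - p) / impressions)
        return (p - half_width, p + half_width)

    # Hypothetical example: 2,000 impressions per ad.
    print(ctr_interval(30, 2000))   # 1.5% CTR -> about (0.0097, 0.0203)
    print(ctr_interval(62, 2000))   # 3.1% CTR -> about (0.0234, 0.0386)

In this made-up example the two intervals don't overlap, so a 1.5% vs. 3.1% split would look real. With only a few hundred impressions per ad, the intervals would swallow the difference.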

If you have a small budget like mine, the best cure for uncertainty is to hedge your bets and keep re-checking your assumptions. If you have real money riding on an outcome, it makes sense to double-check your work. At the very least, be prepared to revert your changes!

Making a Model

I did have another idea for examining our results. What if we could mathematically model our hypothesis and see how it fits the data? I would feel more confident in the hypothesis if the model made a decent prediction about a different data set.

One cool thing about these ads is that we can see the size of the potential audience for each ad. The audience for the iPad ad is 182,000 people. The audience for the iPhone ad is 620,000 people. The only difference between these audiences is that one targets the iPad and the other the iPhone / iPod Touch.

So, let's make lots of assumptions about the size of the audience and the probability of reaching an iPad versus an iPhone user. Let's assume that if we don't target the iPad or iPhone specifically, the probability the ad will be shown on either device is proportional to the size of the audience. For instance:

Probability(iPad) = 182,000 / (182,000 + 620,000) = 23%
Probability(iPhone) = 620,000 / (182,000 + 620,000) = 77%

So, now we can make some assumptions and create a model for the iPhone ad targeting both iPads and iPhones:

77% * sameDeviceCTR + 23% * differentDeviceCTR = combinedCTR

Now we can do algebra:

23% * differentDeviceCTR = combinedCTR - (77% * sameDeviceCTR)

differentDeviceCTR =  (combinedCTR - (77% * sameDeviceCTR) ) / 23%

Now assume combinedCTR = 2.686% (the iPhone image CTR when targeting both devices) and sameDeviceCTR = 3.128% (the iPhone image CTR after targeting only iPhones).

Then differentDeviceCTR ≈ 1.2%.

Now let's take the differentDeviceCTR we just calculated from the iPhone ad and see if it predicts the outcome for the iPad photo ad in the same situation: targeting both iPhones and iPads.

In this case, the equation looks a little different because we're flipping sameDevice (now iPad, because we're considering the iPad image) and differentDevice (now iPhone):

77% * differentDeviceCTR + 23% * sameDeviceCTR = combinedCTR

Now we plug in the same numbers from before:

77% * 1.2% + 23% * 3.128% = combinedCTR

combinedCTR = 1.64%

This model doesn't seem too horrible! It predicted a 1.64% CTR for the iPad photo ad when targeted at both iPad and iPhone. The reality was 1.5%. I'm pleasantly surprised. Again, the Manton theory seems quite reasonable. I'll leave it to you to see what happens if we instead use the iPhone photo's lifetime baseline of 2.931% -- the model doesn't agree as well.
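
If you'd like to play with the numbers yourself, here's a small Python sketch of the arithmetic above. The 77% / 23% split and the CTRs come straight from this article; the rest is just the algebra rearranged.

    # CTRs are in percent; the split comes from the audience sizes above.
    P_IPHONE = 0.77
    P_IPAD = 0.23

    def different_device_ctr(combined_ctr, same_device_ctr, p_same):
        # combined = p_same * same + (1 - p_same) * different,
        # solved for the different-device CTR.
        return (combined_ctr - p_same * same_device_ctr) / (1.0 - p_same)

    # Derive differentDeviceCTR from the iPhone ad's numbers...
    diff = different_device_ctr(2.686, 3.128, P_IPHONE)
    print(diff)  # ~1.2

    # ...then predict the iPad photo ad's combined CTR with the roles flipped.
    print(P_IPHONE * diff + P_IPAD * 3.128)  # ~1.6, versus the observed 1.5

    # Swap in the 2.931% lifetime baseline and the fit gets worse.
    diff_alt = different_device_ctr(2.931, 3.128, P_IPHONE)
    print(P_IPHONE * diff_alt + P_IPAD * 3.128)  # ~2.5 -- well above the observed 1.5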


So what do we conclude from this exercise? For one, targeting your ads specifically to the iPad or iPhone user could be worth your time. I doubled the CTR of my under-performing ad with two clicks.

Even more importantly, running experiments on your ads can really pay off. And the steps aren't difficult: form a hypothesis, run an experiment using ads, collect the data, and make a simple model. With a model, you can try to predict the impact of your change.

With this first bit of new knowledge, maybe I'll save tons of money on ad spend. And then maybe I can apply what I learned to other areas of the sales funnel. And then I can try another experiment, learn, and implement. Maybe my tropical app island isn't entirely out of reach.

I’m really glad that Manton commented on my last article. Thanks! An extra set of eyes is invaluable!
*No, it’s not called the iTouch! Also, be aware that Facebook lumps together the iPhone with the iPod Touch. For certain questions that could be important.

** Yes, technically it should be TTR, not CTR, since you tap on an iOS device.

Monday, April 28, 2014

A/B Testing for the iOS App Store

Update May 1, 2014: Manton Reece had a great question about my results. Scroll to the end for details.

As someone with a career attached to the iPhone App Store, I sometimes feel jealous of the folks who sell things on the web. Websites can use analytics and split testing to learn lots of things about how to make their product more profitable.

In the App Store, you feel lucky to see your sales numbers a day after they happen. If Apple tracks how customers found my app’s page, the number of visitors, or how much time they spent, they don't share it with me.

This shortage of information has interesting consequences. First, there are lots of tools out there that try to help developers figure out what's going on in the App Store: tools like SensorTower, App Annie, Flurry, and AppCase.

Second, you'll find lots of lore on how to boost rankings, rank higher in keyword searches, and get featured by Apple.

Finally, there are tricks we use to get better reviews in the store.

These techniques are nice, but they aren’t proactive or customer focused. Even the best of these tools tell you almost nothing about the organic traffic coming to your App Store listing. It isn’t even clear what impact an improved ranking in the App Store has on conversions.

Am I supposed to take it on faith that efforts spent on improving my rankings will be repaid by increased sales? I want tools that help me build a product that customers want to pay for. I also want tools to make it easy for customers to find my product.

Rob Walling's book Start Small Stay Small advocates that a developer-entrepreneur worry about developing the product last. Finding a market for a product, and figuring out how to reach it are the first priorities. Once the entrepreneur has found a market, she can tailor the product to fit.

Can mobile app developers do something similar? Can we learn about the market for our apps in spite of the opaque App Store, or are we doomed to just make apps and fight our way up the charts?

I don't think we're doomed. I've decided to stop treating the App Store like the mouth of Moving Average's customer acquisition funnel. Instead, I've been experimenting with Facebook's Mobile App Ads for Installs.

These ads are cool because you can target an audience based on the things users have expressed an interest in on Facebook. Not only that, but you can tap into some interesting data about who is clicking on your ads. Note that I’m not saying Facebook is the only solution. I hear that Twitter has some similar tools. I just haven’t had a chance to play with them yet.

My Facebook ads are now the mouth of the customer acquisition funnel. Each ad has a small amount of copy and a 1200 x 627 image. To make my first ad, Facebook requested two different images. Out of the gate, I would have an immediate split test! Cool. And now my customer acquisition funnel looks like the below image.

Note that I still have organic traffic that might throw off my understanding of who is getting through the checkout stage. Since this is an app that is relatively new to the App Store, and it doesn’t rank very high in any search or rankings, I can make some assumptions. The real benefit here is trying to understand why folks are clicking through the ads. I’ll be able to get a better handle on this when I install the Facebook library in the app — it is supposed to let you attribute installs to their ad campaigns.

To make my A/B test, I decided that one image would feature an iPad showing my app and one image would feature an iPhone.

I opened one of my existing iPad screenshots and realized that Facebook was asking for a strange image resolution. I spent some time experimenting on how to resize the asset from the app store to fit the Facebook ad. By the time I got the iPad mockup looking OK and then uploaded it, I was feeling impatient. Below is the image for the first ad.

Like I said, I was feeling impatient. I opened my iPhone mockup and haphazardly cropped off parts of the top and bottom of the phone so the image fit the required dimensions. Not my best work, but I figured I could always replace it. I uploaded the image and launched my ad campaign. You can see the haphazard image for the second ad below.

When I looked at my campaign the next day, which ad do you think was doing better? The second ad with my haphazard, off-the-screen iPhone! After seven days, the ad with the iPad image had a 1.6% click-through-rate while the iPhone had a 3.7% CTR. That’s a pretty big difference that held fairly constant.

With my next app update, I'll replace my first App Store screenshot image with something more like the winning Facebook ad. My hypothesis is that the continuity from the ad to the store listing will help sales. I'm also hoping that the organic App Store traffic will feel attracted to the image as well.

Let me know if you find this sort of post interesting and I'll try to write more about the business of selling a product in the App Store.

Update: Manton Reece of Core Intuition fame had a great question after reading my article. Basically, he asks if the difference in ad performance could have resulted from the differences in iPad versus iPhone impressions. In my words, the hypothesis is: "Users are more likely to click on an ad that features the same device they view the ad on." One of the assumptions behind that hypothesis is that more of my ads are getting viewed on iPhones rather than iPads.

Unfortunately, I was unable to find a way to report the iPad vs iPhone split from the past impressions. Fortunately, the ads do allow targeting along that split. To test the hypothesis, I'm going to change the iPad ad to only target iPads. If the hypothesis is correct, I would expect the CTR to increase for that ad.

If that seems to work, I will also test targeting the ad with the iPhone against only iPhone users. The hypothesis would predict that ad would also get a higher CTR targeting only iPhones. The effect might not be as strong because, again, I assume more Facebook users view Facebook on the iPhone. Still, wouldn't it be wild if I could target each specific color of the iPhone 5c?

If Manton's hypothesis is correct, I've still made the correct decision for the app. I've already submitted an update where I replaced the first screenshot with an image that looks like the second ad, but matches the target device. iPad users on the app store will see an iPad with the top and bottom cut off. iPhone users will see the same iPhone as in the second ad.

Thanks Manton, I'll have a good laugh at myself if the image composition ultimately has nothing to do with the CTR! Even if the composition does have an effect, it's a great experiment. And I'm reminded again that getting third-party opinions and doing things in public is a good idea.

Wednesday, November 20, 2013

Google Glass Q & A

Note: updated 19 December 2013 to reflect changes with XE12: you can now wink to take photos.

Q: Are those Google Glasses?
A: For some reason, the plural sounds so much less cool. Unless you're making fun of me, it’s called Glass.

Q: Are those Google Goggles?
A: Goggles are a different Google product.

Q: What do you see right now?
A: Nothing. The display is usually off unless you're interacting with the device. The big activities that keep the display on are walking navigation and recording a video.

Q: Facial recognition blah blah blah?
A: Nope. Technically you could create a facial recognition app, but it certainly doesn't ship with one. I'm not aware of an app for it either. I’m guessing facial recognition apps would eat some serious battery power or bandwidth.

Q: Can the NSA see me?
A: Yes, it seems like the NSA is spying on a lot of people in the world, but probably not through Glass. I don't think I would have much battery life, storage, or data plan left if I were streaming audio or video to the NSA. I’m not sure how easy it is for the NSA to spy on the audio or video I intentionally record. Google is currently working to make NSA spying more difficult.

Q: Can you see through my clothes?
A: No I can’t. Not unless you're wearing a cling-wrap kilt.

Q: What does it look like?
A: When the display is on, it feels a lot like looking at the rear view mirror in a car. It’s not blocking much of your vision, and you glance up to see it. A few people who have tried on my Glass have been confused by the optics of the display because they expected to be focusing on something centimeters from the eye. Both the size of the display and its apparent focus distance make it seem like looking at a largish TV from 8 feet or so.

Q: So if you look at something about 6 to 8 feet away, the display on Glass will appear in focus?
A: That’s about right. I think this is a Good Thing because I believe there is a lot less eye strain at that focus distance.

Q: OK Glass, take a picture!
A: Sorry, that trick only works if Glass is active and I’m on the home screen. If that were the case, shouting commands might have worked. Nice try though.

Q: How does it feel? Is it comfortable?
A: I rarely notice Glass when I'm not using it. When I first got it, I tended to get headaches if I wore it for more than a few hours without a break. I’m told that folks who start wearing prescription lenses can experience the same thing if they don't ease into it. Now I'm used to Glass, and it doesn't bother me. I sometimes forget I'm wearing it, or not wearing it.

Q: Is it augmented reality?
A: That depends on the definition of augmented reality. I think most folks associate AR with informational overlays on live video feeds. By that definition, AR involves looking through a device to see the world. You don’t look through Glass’s display to see the world. Glass feels a little bit more like closed captioning, or Picture-in-Picture for reality. Glass can provide information based on your location and other cues. I regularly use a Glass app called Field Trip, and it provides information about nearby landmarks as I travel.

Q: How long does the battery last?
A: In my experience, the battery life varies a lot depending on your usage. On my normal day, I have battery left when I get home. When I travel, I use it a lot, and I often have to charge it after 4 hours or so.

Q: How much do you wear it?
A: I wear it 80-90% of the time when I’m not home or at the office. Since I'm surrounded by computers and communications devices there, Glass seems redundant. I also don't wear Glass at Crossfit because I’m afraid of shorting it out with gallons of sweat, or smashing a kettlebell into it. I also don't wear it when I'm feeling extra introverted. Glass is a great way to meet strangers.

Q: What is the biggest change it has made in your life?
A: The biggest change is that a lot of strangers walk up and talk to me. The second biggest change is the ability to respond to text messages and email without touching anything. The ability to spontaneously take photos is a close third.

Q: When will it be released, how much will it cost, and will it do X?
A: I have no idea.

Q: Do you work for Google? Did they give it to you?
A: No, I work for Evernote. I paid for Glass out of my own pocket.

Q: Did you have to tweet to get it?
A: No, I'm part of a different group. I attended Google’s developer conference, Google IO 2012. Anyone who attended that conference had an opportunity to sign up to purchase Glass.

Q: I thought it had lenses.
A: That’s not a question. Also, it does have two sets of lenses that snap in: a clear shield, and a polarized sun shield. With the polarized shades, Glass looks like an especially intense pair of sunglasses. Fewer folks notice that it is Google Glass.

Q: How do you charge Glass?
A: Glass has a micro-USB port located a bit in front of your right ear, facing down.

Q: Has Glass gotten you in trouble anywhere?
A: Not really. A bouncer at a club in San Francisco was scandalized when I told him that it could take photos. He implied that photography in a nightclub was a huge breach of etiquette. I offered to wear Glass around my neck, and he was OK with that. Another time, a man walked up to me and said I couldn't wear Glass in a pub. When I asked him who he was, he admitted that he was just a patron and that he was messing with me. We both laughed and chatted about Glass.

Q: Do you wear Glass in the restroom?
A: I remove Glass from my face and wear it backwards around my neck in the restroom. Ever since Nick Bilton’s strange blog post and NYT article about Glass in the restroom, I've tried to avoid accusations of urinal photography. We live in strange times, but it seems like good etiquette.

Q: Oh, does Glass take photos when you wink?
A: As of XE12 (released 17 December 2013), it is possible to configure the second-generation Glass Explorer Editions to take photos when the user winks the right eye. You can read more about the feature in the wink help page. You'll note that Google offers etiquette advice, and that it is considered an experimental feature; it isn't enabled when Glass arrives from Google. Before XE12, there was a piece of third-party software that used the proximity sensor to detect a wink (I think) and trigger a photo. As convenient as that may be, I think it has a high creepy factor, and I haven't installed it. I think this misconception around wink photography started with Nick Bilton's article. He implies that wink photography is a standard feature of Glass, even though it was a hack. I really wish he had asked someone first before writing about it in the New York Times.

Q: How do I know you aren't taking a photo or recording this now?
A: Out of the box, a photo or video will activate the display, which you can see from both sides of the prism. But I suppose you don't know for sure; I might have hacked Glass, or I might be wearing a wire or a hidden-camera bow tie.

Q: People standing in front of you can see what you're looking at?
A: Yes. The display looks quite small from the other side, but you can see it. If you were really close, you could probably read it. You would probably know if I was using Glass rather than paying attention to you.

Q: Tell me something ironic.
A: Many people with smart phones try to take covert photos of me wearing Glass. 

Sunday, November 17, 2013

What is Glass?

Note: The last few paragraphs were scratched and replaced to discuss the new GDK on 4 December 2013

When Steve Jobs introduced the iPhone, he called it a telephone, an iPod, and an internet communicator. One of the most common questions I get when I wear Google Glass is "What is it?" I wish I had an answer as concise as Steve's.

Glass isn't a phone, or an iPod. I think it qualifies as an internet communicator. But more than that, I claim that it is a tool for simplifying and speeding up the common interactions you might have with a smartphone or computer. Perhaps it will inspire computer interactions that don't yet exist. Like most devices with Apps, what it does depends a lot on the software you use.

The Pieces

If you tear open a Glass device like these folks, you will find a camera, a display, a bone conduction speaker, a touchpad, a shutter button, an accelerometer / gyroscope, WIFI and Bluetooth transceivers, and a CPU. It comes with a snap-in polarized sun shield too.

The display technology works by projecting an image into the prism which sits above the right eye. The images it creates are translucent; you can see right through them. The positioning of the display above the eye -- not in front of it -- means that you aren't trying to peer through it to see the world. The prism appears to have a photosensitive film on the side away from your eye to create a darker background for the display in bright light.

The bone conduction speaker is a tiny pill-shaped apparatus that touches the head near the right ear. It looks temptingly like a button. One of the curious properties of the bone conduction speaker is that in loud environments you can hear it better by plugging your ears. It also tickles just a little bit when it makes sound.

Using Glass

To begin interacting with Glass, you either need to tap the touchpad near your temple, or perform the Glass head flip. The head flip involves tilting your head up until the screen activates, a configurable behavior. With both the tap and the flip, the display shows the home screen and begins listening for the magic words: “OK Glass”.

If you say “OK Glass”, you can verbally select from a menu of commands: send a message, take a picture, record a video, get directions, make a call, start a video hangout, or take a note. Speak and Glass obeys. Some of the commands appear depending on what services you have connected, or how your phone is configured. The “Take a Note” command, for instance, can be handled by the Evernote Glass app, and it lets you dictate a note.

Using the touchpad, the same options are available, and you can also navigate the card-based interface of Glass. To the left of the home screen there are a collection of cards mostly related to the Google Now service. You might find a card with upcoming events on your Google Calendar, a card for the local weather, a card for stock prices, or a card offering driving times and directions to destinations recently searched for or for calendar events. These cards and their contents are contextually sensitive just like the Google Now cards on an Android Device, or in the Google Search app on iOS. They appear and vanish depending on what Google thinks is most useful. The card furthest to the left is the settings card, which shows the battery charge, and allows the user to configure Glass.

To the right of the home screen, there is a row of cards in reverse-chronological order, starting with the most recent. These are cards which come from your own interactions with the device, from communications, or from third party services. If you take a photo, you’ll find a card for it in the timeline. Did you search google? You'll find a card for that. Text messages and emails too. The interface feels like a long strip of film that you can click through one frame at a time, like an old-fashioned slide show.

Typically, each card responds to a tap on the touchpad. Depending on the card, it will either show a menu or another collection of cards that were metaphorically stacked. Text messages are a good example of stacked cards. In the timeline, only the most recent text is visible. Tapping on that message reveals a card for each message in the conversation. Tapping on any one of those cards offers a menu: reply, read aloud, call, delete.
You can see in the image above a diagram of the Glass interface stitched together from actual Glass screenshots. Click or tap on it to see a larger version. The home screen is where you usually start an interaction, and is one of the indications that Glass is ready for a verbal command. If you swipe forward on the touchpad, the next thing you would see is a text message, followed by a photo taken with glass, and finally a search for "will it rain today."

The triangular dog-ear in the top right corner of the message card is a clue that this is a stack of cards. If you tap on the touchpad while the text message is visible, it will dive into that stack of cards: a list of messages in that conversation. Follow the yellow arrows above. If you tap on any of those message cards, you are offered a menu.

The vertical stacking of the interface is a useful way to think about the UI. Swiping down on the touchpad will return the user to the next level up. From any of the menu cards, you can swipe back to the messages list, and from there you can swipe back to the top level timeline. From there, an additional swipe down will turn off the screen.

Incoming Communications

If you get a new message or interaction from an App, Glass will chime. If you respond immediately by tapping or performing a head flip, you will be shown the card associated with the notification. Depending on the notification, there will often be an “OK Glass” cue on the card indicating that you can address the notification verbally. When I get a new email or text message, I can say “OK Glass, read aloud”. Glass will then read the contents of the message to me. When it’s finished, I can say “OK Glass, reply,” and then dictate a response.

One useful UX trick that Glass uses is that it will display the text you dictate for a few seconds before performing an action. This gives you an opportunity to cancel a message in case there was a transcription error.

Photo and Video

In addition to the audio commands, you can take photos or a video by using a physical button on top of Glass. Photos and videos can be explicitly shared or pulled off of Glass using USB. In addition, once Glass is on a WIFI network and has a decent battery charge, it will automatically upload the media to a private Google+ album. You can choose to share or download the images from there.

The camera on Glass (at least on the first-generation Explorer Edition I have) is a wide-angle, fixed-focus camera. I don’t really think that the camera compares to what you would find in an iPhone 5. However, Glass uses some computational photography techniques to create better photos than what the hardware normally would produce.


Navigation

I've only used Glass for walking navigation, but it works really well. Walking navigation seems like a killer app to me. The map is continuously projected in the display, and is oriented in real time with your head motion. Since the map spins so that it orients where your head is aimed, there is no need to look at street signs. Just line up the arrow with the path and walk.

You look like a normal, purposeful human being using walking navigation on Glass. Compare that to folks trying to navigate with their smart phone. They walk with their heads either down, or looking for street signs. They walk ten feet in a direction before making a u-turn to go the correct direction. Glass is a nice improvement.


Developing for Glass

Glass offers two main paths for developing apps. The first is the Mirror API. To use the Mirror API, the app developer doesn't write code for Glass. Instead, she writes server code: her server interacts with Google's servers, which then act as a proxy for a user’s Glass device. The server and the Mirror API interact with JSON and HTML representations of timeline items through RESTful endpoints.

Since the app runs on your server, you can use whatever technology you want to implement your side. Google has example code written in a variety of different technologies.
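
To make that concrete, here's a rough sketch of what pushing a card to a user's timeline can look like from a Python server. The endpoint and JSON shape follow the Mirror API v1 documentation, but treat the rest as an illustration: the access token is a made-up placeholder that you'd obtain through Google's usual OAuth 2.0 flow with the glass.timeline scope.

    import requests

    ACCESS_TOKEN = "ya29.your-token-here"  # hypothetical placeholder

    card = {
        "text": "Hello from my server!",
        # Built-in menu actions the wearer can pick after tapping the card.
        "menuItems": [{"action": "READ_ALOUD"}, {"action": "DELETE"}],
    }

    response = requests.post(
        "https://www.googleapis.com/mirror/v1/timeline",
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json=card,
    )
    print(response.json())  # the created timeline item, echoed back as JSON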

Users enable an app that uses the Mirror API by authorizing it with a familiar Google authentication flow. If Google has approved an app, it can be switched on through the MyGlass Android app, or the Glass dashboard.

The second path for creating Glassware is writing an Android app. You can build apps using the traditional Android development tools and install them with a USB cable. Developers will want to heavily customize their apps for Glass, since there is no touch screen and most apps aren't quite ready for a 640 x 360 display. At the moment there is no simple distribution method for apps created this way. Glass doesn't yet come with the equivalent of the Play Store.

Update: Google has released a preview of what they're calling the Glass Development Kit (GDK). The GDK is an Android library that gives developers direct access to the Glass-specific elements of the device: the timeline, the cards, menus, and so on. It also enables Glass apps with real-time interaction.

Apps developed using the GDK are installed the same way the Mirror API apps are: through the MyGlass web interface, or the MyGlass Android app. Flip the switch, and the package is pushed on to the device and installed. Glass requires an internet connection to retrieve the APK.

Along with the GDK, Google and several third-party companies released GDK based apps. Google released a compass app. Word Lens released an impressive translation app that replaces text in a live video feed from the camera. And there are several other apps in categories like sports.

The GDK opens up a lot of new possibilities for Glass development. It offers more challenges too, since it takes careful work to get smooth performance and low energy usage from code run on the device itself. That's just the sort of thing I enjoy. I've already started having fun with the GDK.

Saturday, November 9, 2013

Sony NEX-6 Long Term Review

The time seems right for a long-term followup review of the Sony NEX-6*. Sony recently announced two new high-end cameras using the same lens mount as the NEX-6, but featuring larger full-frame sensors. The new cameras are called the Sony Alpha a7 and the Sony Alpha a7r, and they appear to be targeting the professional photographer. Both have large, high-resolution sensors mounted in durable, splash-proof bodies.
[Photo: The Colors of Burano]
I expect that the addition of these impressive-looking cameras on the high end of the spectrum will increase interest in the midrange E-Mount camera bodies. Why? I imagine that some folks will want to test out the NEX system with the expectation of later growing into the new full-frame cousins. Although the new a7 and a7r seem to have reasonable prices for their capabilities, they are still expensive, high end cameras.

The crop-frame NEX line of cameras also has a few minor advantages over the new a7 besides price. There are many more lenses on the market designed for the NEX than for the full-frame a7 and a7r. Sure, you can use a crop-frame E-mount lens (most of the lenses launched before the Alpha a7 and a7r qualify) with any E-mount camera, but those lenses won't let you take advantage of the full sensor area of an a7 or a7r. Depending on what kind of photography you enjoy, you might discover that the most suitable lens isn't designed to fill an entire full-frame sensor.
[Photo: Cafe in Venice]
On the other hand, there is no reason you can’t use a full-frame lens on a crop-sensor NEX camera. I personally never owned a full-frame Canon camera even though most of my lenses were designed for the full frame. I tell anyone who asks to spend more on great lenses rather than great camera bodies. You will probably find that you use a given lens much longer than a given body. The technologies in camera sensors change much more rapidly than the technology in the lens: just ask the folks who are using ancient Pentax, Leica, and other manual-focus lenses on modern cameras. You don't have to buy the best body to enjoy and get value from a fantastic lens.

Read on to see how the NEX-6 has treated me over the past ten months. All of the photos you'll see in this review were made with the NEX-6. Click on them to get a larger view.

Smart Phone Integration

In my first review of the Sony NEX-6, I noted how terrible the PlayMemories Android and iOS (iPhone / iPad) apps for the camera were. Maybe Sony heard my whining, because several new revisions of the PlayMemories app have been released. Now I would say the apps are at least mediocre. That might sound bad, but it is a big upgrade over terrible. At least the apps work now. I can go to an event, take photos with my NEX, send them to my iPhone, iPad, or Android phone, edit them, and share them with just a few minutes of work. Like I mentioned in my last review, the camera makes fantastic photos. The extra steps involved in using the NEX instead of the iPhone 5’s built-in camera are often worth it.
[Photo: Big Shade]
So how does sharing to Android work? In one method, you first go to the photo you want to share, hit the menu button, select “Playback”, select “View on smart phone”, and then select “This image”. At this point you can open the PlayMemories app, enter the password for your camera (for the first use only), and then wait a few seconds for the phone to connect to the camera’s wifi network.

For the iPhone, you need to use the phone’s settings app to connect to the Camera’s wifi network before launching the PlayMemories app. For both platforms, the process takes maybe thirty seconds if it goes well. The Android app has crashed for me several times. I have also had issues when more than one of my devices connects to the same NEX-6. If more than one smart phone or tablet connects to the NEX-6, the sharing functionality seems to fail. It took me a while to realize what was happening. Bummer!

If all goes well, you will see a thumbnail of the image you were viewing on the camera. Tap the thumbnail to get a larger preview, tap it again to select it, and then hit the copy button to copy the photo to your phone's gallery. The app has a share button, but in my limited testing, it doesn’t seem reliable. Unless you can persuade the share button to work, you’ll have to go hunting in the gallery to find the photo you just copied over.

Lens Adapters

Another new development since my initial review is the RJ Camera "Electronic Aperture Canon EOS (EF, EF-S) mount lens adapter to Sony E mount", which allows me to attach my big beautiful Canon glass to the NEX. The adapter I ordered allows the camera to control the aperture, capture the EXIF data from the lens (focal length, and possibly some other data), and to perform autofocus. I purchased this adapter so I could use the variety of Canon EF mount lenses I already own. Note that there are several different competing adapters that allow you to connect a Canon lens to an E-mount body.
[Photo: Harvest Tools]
Sadly, the autofocus using this adapter seems limited to contrast-based methods (the usual NEX-6 phase detection seems to be disabled), and feels really slow. Occasionally, the autofocus just gives up. The autofocus feature of the RJ Camera adapter doesn’t provide a great experience. It is useful in certain situations, but I mostly focus my Canon lenses manually, relying on the focus peaking and the ability to zoom in on the live view.

I mentioned in my initial review that the Sigma 30mm lens only focuses using contrast detection. The Sigma feels reasonably fast to focus. Don't count on getting similar autofocus performance from the RJ Camera adapter. The focus behavior with the RJ Camera adapter attached jumps through what seems like a series of coarse focus points before dialing in at a finer level. Sometimes the camera gives up before the entire process completes. I've mostly been using this functionality with my Canon 100mm IS L Macro lens. Different lenses seem likely to have different results. The camera has a very difficult time focusing with my Canon 8-15mm L fisheye lens. Your mileage may vary, but it seems unlikely you will get the kind of focus speeds needed for action photography.

Luckily, the NEX-6 has two nice innovations to assist with manual focus. First, it offers focus peaking. Focus peaking highlights areas in the viewfinder which are in focus with a colored fringe. It seems to work by highlighting high-contrast transitions at the pixel level. If you don't have an extremely fast aperture, it is an easy way to verify focus — as long as there is an area of high contrast for it to identify.
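
To make my guess concrete, here's a toy sketch of the general idea in Python. I don't know Sony's actual algorithm, so treat this as an illustration only: mark every pixel whose local gradient magnitude crosses a threshold, and paint those pixels with the fringe color.

    import numpy as np

    def peaking_mask(gray, threshold=0.2):
        """gray: 2-D array of luminance values in [0, 1]."""
        gy, gx = np.gradient(gray)      # local contrast in each direction
        magnitude = np.hypot(gx, gy)    # edge strength per pixel
        return magnitude > threshold    # True where the fringe would go

    def overlay(rgb, mask, color=(1.0, 0.0, 0.0)):
        """Paint the masked pixels red on a copy of the RGB frame."""
        out = rgb.copy()
        out[mask] = color
        return out
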
[Photo: Troll Door]
The other tool the NEX-6 offers is the ability to zoom in to the live view from the viewfinder or rear LCD display. Press a button, zoom in, focus. This takes more time than the focus peaking, but it does offer higher precision. Note that with the RJ Camera adapter, the in-lens image stabilization of my Canon lenses is disabled while using the zoomed-in live preview. Unless you have a very wide lens, you’ll find this behavior disappointing. IS would really help keep things steady while focusing my Canon EF 100mm f2.8L IS Macro lens.

My one final comment is that the tripod collar which came with the RJ Camera adapter didn’t have a long enough screw to securely tighten it to the adapter. I purchased a pile of washers and a longer screw at Home Depot for a few bucks. Also, once you remove the body from the adapter, nothing but friction holds the collar on. That’s OK, unless you get confused and try to hold the adapter by the tripod collar. Do that without the body attached, and you might drop your lens. Be careful!


Durability

In my original review of the NEX-6, I complained about losing the viewfinder eyecup while walking around the Magic Kingdom. I’m still a bit disappointed by that experience, but since switching my carrying system away from the Black Rapid, I haven’t lost an eyecup again. If you plan to hang your camera upside-down from the tripod mount, you might start losing $12 eyecups too.
[Photo: Elegant Food]
In case you’re curious, I now use a PeakDesign Leash and also a PeakDesign CapturePRO. The Leash is a very versatile shoulder strap that can be rapidly removed or reconfigured to become a tether to attach to your belt. The leash can be made quite long too, which allows you to hold the camera weight across the shoulders rather than just around the neck like a tourist.

The Capture is a system which securely holds a camera to a belt or strap using a specialized plate (compatible with Arca-Swiss tripod heads). It’s a great way to completely remove the weight of a camera from the shoulders, and you can tighten it so that the camera doesn’t bounce around when you walk or run.

Like my Canons, my NEX-6 has had some rough treatment from me. It has been bumped around in bags with just a light neoprene padding. It has swung from my shoulders and been bumped into people. The camera has been drizzled on and had various tasty sauces spilled on it. And it has pretty much survived without complaint. In fact, there is very little visible evidence that it has had a tough life at all. The only exception was the dang viewfinder eyecup that I lost about ten months ago.
[Photo: Daily Driver]
Occasionally, the kit lens will fail to register on the camera. It’s always been an easy fix though: turn off the camera, remove the lens, re-attach the lens. The poor plastic lens housing has been abused enough to have a bit of an excuse. The 16-50mm kit lens spends more time on my camera than any other lens I own. It isn’t the best glass in the world, but it sure is light and versatile. Adobe Lightroom does a fine job of correcting many of its flaws.

Battery Life

The battery life on the NEX-6 hasn’t improved, but I did buy an inexpensive set of two Wasabi batteries with a wall charger and even a cigarette lighter adapter. That means I have a total of three batteries for the NEX, and that I can charge them rapidly without worrying about the finicky USB charging on the camera body (see previous review). The three batteries have been more than plenty to get me through any day. When I travel for a three- or four-day weekend, I sometimes leave the charger at home. I can always use USB to charge in an emergency.

If you’re in a situation where you need the camera ready to take photos instantly, DSLR style, you will probably burn through batteries more quickly than I usually do. If you plan to use it extensively throughout a day, I suggest having at least one extra battery on hand. Luckily, the Wasabi batteries are not too expensive.

The Keeper Rate

I’m still convinced that my NEX-6 has a much higher keeper rate than my Canon 7D, or my Canon 40D. That is, I feel that more of the photos I take with the NEX-6 are sharp. With the Canon 7D, I seemed to capture a certain percentage of my photos slightly blurry, usually due to camera shake. I’m almost convinced that the dang swinging mirror in the Canon is what ruined so many photos for me. I suppose that it’s also possible that the additional weight of the dSLR somehow contributed to shaky photos too.
[Photo: Sunday Exercise]
Either way, I worry less about motion blur in my photos on the NEX-6. It still can happen, but I don’t feel the need to take 3 photos of every beautiful scene.


Conclusion

For the most part, I really like the Sony NEX-6. Yes, the apps for Android and iPhone still make me weep. I develop iOS apps for a living, so I might be pickier than the average user. That said, I feel much better mentioning the feature than I did in January. The apps work much better now, even if they still frustrate me.

Also, I sometimes miss the lightning-quick focus of the Canon 7D, especially when I’m manually focusing my old Canon Glass on the NEX-6. The battery life of the Canon was far better too. And I can’t complain about the easy access to the most commonly used settings through buttons on the camera body. Every time I have to navigate a menu to change a basic setting, I miss my 7D.
[Photo: Crossing Dark Waters, the Bay Bridge in San Francisco]
I don’t miss the size and weight of a dSLR though. The automatic modes on my NEX-6 are far more advanced and far more useful than on my Canon 7D. The NEX does a great job of picking shutter speed and aperture without my help in most situations (as long as I'm not using an adapter), and that’s great.

Likewise, auto-ISO is perfectly acceptable on the NEX-6. In my opinion the camera makes the correct tradeoff in terms of shutter speed and ISO. And I rarely feel like I need to disable automatic ISO to get a shot — something I can’t say about my Canon 7D.

I also love the electronic viewfinder, which is usable even in the dark and allows you to zoom in before taking a photo. When I use an optical viewfinder, I feel like I’m using an antique. How will I know what a photo will look like without knowing how the sensor sees the world? I can’t believe I was ever concerned about the lack of an optical viewfinder.

Over the next ten years, I see no reason for the traditional dSLR to stick around. At the moment, they have a few advantages, but I think we will see the same capabilities in mirrorless cameras in a few years. Cameras with moving mirrors will seem just as quaint as those antique cameras with bellows and flash powder do today.

[Article updated November 10th to include a link to the Wasabi batteries]

*Moving Average Inc. is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to Amazon.com. Buying items through this link helps sustain my outrageous camera addiction and is much appreciated!

Tuesday, September 24, 2013

You've Learned from Professors. Now Learn from Entrepreneurs.

[Photo: Gail Goodman]
If you want to get a software company off the ground, you need role models who have built software companies. Although there are tons of technology conferences out there, very few cater to the business side of technology. Even fewer events are tailored for software businesses that aim to make money from customers rather than from investors.

If you want customers, you're wasting your time attending an event that promotes taking investment funds in order to accumulate non-paying users. Developing customers who value your product so much that they pay for it requires a different set of tools. Equally, you won't get much out of a conference about selling software if you'd prefer to make money by selling ownership of your business.

One nice place where a room full of folks who make and sell software meet annually is the Business of Software Conference. You'll find an international mix of people who represent businesses from one-man software shops all the way to international operations that take huge sums of investor money in order to make even larger sums of customer revenue.

No matter where you are in your software business, you'll find someone here who has been in the same place. Among friends and off-the-record, you may be shocked by how much information and support your fellow software entrepreneurs are willing to offer you.

As a bonus, the organizers of BOS have thoughtfully arranged to have an array of amazing speakers present to the attendees on entrepreneurial topics. They tend to have a lot of value. I still think back to talks from my first BOS in 2009. How often do you think about a presentation even from last year?

If you're a student and you'd like to sell your software (or perhaps sell more of your software), consider applying to my Business of Software Student Grant. The grant winners get a ticket that is currently offered for $1895.00. If you spend ten minutes writing or recording your application and get in, that's a return of something like $11,370.00 an hour. Except you won't actually get richer unless you take what you learn at BOS and use it to improve your business. Software isn't quite alchemy, even if some of the folks I've met seem to turn everything they touch to gold.

It's really easy to apply, and the benefits of developing friendships among fellow entrepreneurs are immeasurable. You can find the details here. Hurry, applications are closing soon!