My first app on the App Store, GoFest, was released today.



You can find it here.

It feels pretty good! GoFest is an app for beer festivals and will be hosting its first festival in two weeks at New Orleans on Tap.



Rejection feels horrible, doesn't it? It feels even worse when you could have avoided it so easily!

I submitted my first iOS app, GoFest, to iTunes Connect last week and got notice back yesterday that it's been rejected because of its metadata. What does this mean? Well, in my particular case, it's because the 4-inch retina screenshots I submitted were taken on a device running iOS 7. Doh! The app itself targets iOS 6, but because the phone I took the screenshots on was running iOS 7, they weren't accepted.

I wish I had lots of devices around running all the various iOS versions available, but alas, I do not. So: retake the screenshots. Resubmit.

For my current client work, I recently needed to deploy an instance of the open-sourced Echoprint server to the cloud. Echoprint is written in Python and uses Tokyo Cabinet and Tokyo Tyrant as its data store. Given these technologies, I found the only cloud hosting option available was a custom Amazon EC2 environment. Now, I'm not an EC2 expert, and I must say Amazon's documentation and tutorials are both confusing and frequently out of date. It was a bit of a headache to get everything set up and configured, so I thought I'd document it!

Here goes: if you'd like to set up the open-sourced Echoprint project on an Amazon EC2 instance, here's what you have to do. (As of August 2013.)


Setup an AWS account and launch an EC2 instance


Create (or log in to) an Amazon AWS account and go to the management console. Navigate to your EC2 dashboard and click 'Launch Instance'. From here, choose the 'Quick Launch' wizard; it will be fine just to get us going. A few things to note:

1. Create a new public/private key pair for this server. We'll be using it later to set up SSH access. Download it now; we'll come back to it shortly.

2. Server type. I chose the default Linux AMI; this is fine.

From here, finish up by clicking 'Continue' and then 'Launch'. This should launch a virtual server for you, which is what we want!

Next, we need to be able to access this machine from our local computer, which means setting up SSH access. Download the EC2 CLI tools. Confusingly, these are called either the CLI tools or the API tools in different places in the AWS documentation. They are the same thing. Jeez!

I created a local directory where I would store all my EC2 stuff. I recommend doing the same. In a terminal execute:


> mkdir ~/.ec2

Unzip and move the bin and lib files from the CLI tools you just downloaded into this directory.
Move the key pair we created and downloaded during the wizard into here too.

You'll also need to update the permissions on that key-pair file.

> chmod 400 ~/.ec2/saskey-ec2.pem

Make sure the following environment variables are set. You'll have to get your AWS access and secret keys from the AWS console to set these variables; they are necessary. On a Mac:

> vim ~/.bash_profile

Add:

export JAVA_HOME=$(/usr/libexec/java_home)
export EC2_HOME=~/.ec2
export PATH=$PATH:$EC2_HOME/bin
export AWS_ACCESS_KEY=AKIA***********FRS55A
export AWS_SECRET_KEY=u/ixxR*****************XsTfZl64I/H
export EC2_PRIVATE_KEY=$EC2_HOME/saskey-ec2.pem

Save and close (esc -> :wq!)

> source ~/.bash_profile

Confirm everything is working by running one of the CLI commands we just installed, for example:

> ec2-describe-instances

Grab your instance's public DNS name from the output (or from your EC2 instance console). It should look something like this: ec2-xx-xx-xx-xxx.compute-1.amazonaws.com.

Now we can SSH into our new Linux server!

> ssh -i ~/.ec2/saskey-ec2.pem ec2-user@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com

You should see something like this:

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

[ec2-user@ip-xx-xxx-xx-xx ~]$


Prepare This Server for Echoprint


We need to get the appropriate things onto the server. Requirements:

Java should be on there: run > java -version

Python should be on there: run > python --version

I'm running Python 2.6.8, so no need to worry about installing simplejson (the json module is in the standard library from Python 2.6 on).

We need to install Tokyo Cabinet - run the following:

> wget http://fallabs.com/tokyocabinet/tokyocabinet-1.4.48.tar.gz
> tar -xvf tokyocabinet-1.4.48.tar.gz
> cd tokyocabinet-1.4.48
> sudo yum install gcc
> sudo yum install zlib
> sudo yum install zlib-devel
> sudo yum install bzip2-devel.x86_64
> ./configure --enable-off64
> sudo make
> sudo make install

We need to install Tokyo Tyrant - run the following:

> wget http://fallabs.com/tokyotyrant/tokyotyrant-1.1.41.tar.gz
> tar -xvf tokyotyrant-1.1.41.tar.gz 
> cd tokyotyrant-1.1.41
> ./configure
> make
> sudo make install

Install the Echoprint project


Now we need the Echoprint project itself. I'm going to pull it from the Github project.

> sudo yum install git.x86_64

Go make a fork of the echoprint-server GitHub project and grab its GitHub address. You'll need it below.


> cd /usr/local

> sudo git clone https://github.com/your-git-id/echoprint-server

Now you've got the Echoprint project. Everything is installed and ready, so let's fire it up.

Starting Solr (the Echoprint project uses Solr to index its audio fingerprints)

> cd /usr/local/echoprint-server/solr/solr
> java -Dsolr.solr.home=/usr/local/echoprint-server/solr/solr/solr/ -Djava.awt.headless=true -DSTOP.KEY=YOURKEY -DSTOP.PORT=8079 -jar start.jar

(Note that all the -D flags must come before -jar; anything after -jar is treated as the jar file name and its arguments.)

As a note, the logs will now output to: /usr/local/echoprint-server/solr/solr/logs


Stopping Solr (You may need this later)

> java -Dsolr.solr.home=/usr/local/echoprint-server/solr/solr/solr/ -Djava.awt.headless=true -DSTOP.KEY=YOURKEY -DSTOP.PORT=8079 -jar /usr/local/echoprint-server/solr/solr/start.jar --stop


Start Tokyo Tyrant

> sudo mkdir /var/ttserver
> sudo chown ec2-user /var/ttserver/
> cd /usr/local/sbin
> nohup ttservctl start &

You should see output like this:

Starting the server of Tokyo Tyrant
Executing: ttserver -port 1978 -dmn -pid /var/ttserver/pid

Done

Start Echoprint API Server


> sudo easy_install web.py
> cd /usr/local/echoprint-server/API
> nohup python api.py 8080 &

(nohup keeps the server running after your SSH session ends.)




Make it Accessible to Outside world


If you'd like this accessible to the outside world, so you can later do your importing and querying remotely, you'll need to open up the port the Echoprint API server is running on (8080 here). We tell our EC2 instance to do this by adding a rule to the security group's inbound rules in the AWS console.



Now you're ready to import and query. Reference the Echoprint project documentation for instructions on how to do this!



I've been spending a lot of time exploring the start-up scene here in NYC. I've got my own, GoFest, that I'm nurturing, and I've also been looking to get involved in another. Finding a team with the right mix of people, doing something that fits you, is difficult. As I've explored and searched for a start-up to join, I've been learning more about evaluating business models for risk and potential success. There are lots of angles to approach this from, but I've found the easiest is to root everything in the most important central point: growth.

It seems unanimous that what makes a start-up a start-up is growth. It must attain and maintain (while its market and infrastructure allow) rapid growth. In this post (whose primary sources are The Lean Startup and Paul Graham's essays) I will examine the definition of a start-up from the viewpoint of growth. In doing so, I'll cover a number of topics that I've found foundational.

  1. The types of business models capable of rapid growth.
  2. What does this growth look like? How long must it be maintained?
  3. How is it defined? Revenue vs user-base.
  4. Targeting and strategizing to reach rapid growth.

Let's look at these one by one!

Business Models Capable of Rapid Growth


Lots of businesses are started each year, but only a fraction can achieve the rapid growth start-ups aim for. What sets them apart? There's a definitive difference in the business model. For a business to achieve the rapid growth of a startup, it must do at least these two things:
  1. It must make something many people want.
  2. It must be able to reach and provide that something to almost all of the people wanting it.
How do startups achieve both of these? Frozen yogurt has recently exploded in popularity. Does selling frozen yogurt equate to a start-up? People everywhere are demanding more frozen yogurt. That satisfies #1. Opening a frozen yogurt shop around the corner from your house doesn't mean you can reach and serve everyone who wants to buy frozen yogurt, however. You will be limited to those who can and are willing to come into your frozen yogurt store. So, businesses like these do not satisfy #2 of the above criteria.

This is why businesses like Etsy, Pinterest and many more have been tagged as super successful startups. They are able to reach anyone who wants to use their service (and has access to the technology that enables them to do so - PC, mobile phones, etc..).

One further thing to note: rapid growth is only possible when each new customer is relatively cheap to acquire and serve. Your infrastructure must support each new user, but, as with Etsy and Pinterest, this can be done in an automated fashion where each new user requires minimal (if not zero) human interaction. There are still start-ups where this cost is higher: if you're selling tangible products, you must have the inventory and the workforce to stock, package, and ship the items you sell. Scaling these businesses efficiently is trickier, but they are still start-ups if they can manage it. (Examples are Amazon and many of its acquisitions: Zappos, Endless, etc.)


What is Rapid Growth?


Understanding the business models capable of becoming startups helps to define what this rapid growth looks like. It seems that start-ups go through some pretty standard stages.

  1. Initialization. This is the declaration of intent to start a company and the initial work to lay some framework in place and get the right team of people together. There's probably not yet a customer-base at this point, so growth is typically zero.
  2. Product development. This is where growth is important. If you follow the Lean Startup methodology (the bible for startups) you should be getting an MVP in front of customers as early as possible and begin getting feedback. If you use this feedback to drive your product development, you'll hopefully see customer growth during this phase.
  3. If product development goes well, and something is built that customers love, a business can grow into a large company. It's when a business model reaches the limits of its market or infrastructure that growth starts to slow.

Reaching the final stage doesn't mean the end of growth. It may for a particular product or business model, but this could be the point when more innovation happens and new markets, products, or services are developed. The cycle can start all over again!
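To make "rapid" concrete: Paul Graham's "Startup = Growth" essay suggests 5-7% per week is a good growth rate for a young start-up (the specific rates below are his benchmarks, not figures from this post). A quick sketch of what weekly compounding implies over a year:

```python
def annual_multiple(weekly_rate, weeks=52):
    """Compound a weekly growth rate over a year of weeks."""
    return (1 + weekly_rate) ** weeks

# 1% a week doesn't even double in a year; 5-7% a week is an
# order-of-magnitude difference.
for rate in (0.01, 0.05, 0.07):
    print(f"{rate:.0%}/week -> {annual_multiple(rate):.1f}x per year")
```

This is why small differences in weekly growth matter so much: the gap between 1% and 7% per week is the gap between a flat business and one that grows more than 30x in a year.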


Revenue versus User-base


This question would not have come up 10 or 20 years ago: a business had to define its growth by revenue. If it wasn't making money, it wasn't growing. Today, there are more options.

Again, growth rules. Because rapid growth is only possible by expanding your user base, it's debatable whether this should be prioritized over revenue. It seems that today, loyal non-paying users are valuable. Facebook had roughly 900 million users around the time of its IPO, all of them extremely accessible for any contact Facebook wishes to make. That is valuable: Facebook was valued at $104 billion at its IPO. While Facebook had yet to fully capitalize on its user base, there is trust that there is money in this model.
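A back-of-the-envelope way to see the value the market put on those non-paying users (the ~900 million monthly-actives figure is from Facebook's S-1 around the IPO, my number rather than one from this post):

```python
# What the market implicitly paid per (non-paying) user at Facebook's IPO.
valuation = 104e9        # ~$104B IPO valuation
monthly_actives = 900e6  # ~900M monthly actives (assumed, from the S-1)
value_per_user = valuation / monthly_actives
print(f"~${value_per_user:.0f} per user")
```

That's a concrete price tag on a user base that, at the time, generated very little direct revenue per person.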

Still, a strong revenue model should be at the foundation of your business model if possible. The short customer feedback loop returns here: testing how to charge for your product or service as it's developed is the path to success. You can determine which features customers are willing to pay for and which ones they're not.


Targets and Strategizing


These feedback loops with your customers are what should drive product development and, inevitably, your growth. This is how waste is reduced. If you're building something customers love, it's a success! If you're building something they don't want, it's best to find out as soon as possible. Get it, or at least a minimum version of it, in front of them early, and find out whether they think it's valuable and whether your assumptions about how they'll receive it are correct.

If you can fake a feature to get customer feedback sooner, do it! I particularly liked the Zappos case study used in The Lean Startup on this topic. Instead of building a complete online shoe store, inventory and all, the founder of Zappos got permission from local shoe stores to photograph their stock. He put the photos online to find out if people were willing to buy shoes over the internet (a core assumption in his business model). For each purchase, he would go to the store, buy the shoes, and mail them to the customer. This is not a sustainable model, but it proved that people would buy shoes online: valuable customer feedback. He could then move on to the next assumption he needed feedback on, learning from his customers as he went.

The takeaway is, again, to let growth drive your progress. Target the growth of your customer base, and the revenue it generates, in the decisions driving your product development. If you take things in one direction and see no growth, it may be time to try something else. (The pivot! As described in The Lean Startup.)

In conclusion


This is really just scratching the surface of the topic. After reading The Lean Startup and several of Paul Graham's essays, I really like the idea of basing everything on growth. It's easy to ask at each decision-making point: Will this lead to growth? How can we test that quickly?



I've been using Heroku for a few months to host a Rails app that serves as the backend to an iOS app I've been working on. Heroku has been awesome because it's incredibly easy to deploy your web app to a production environment seamlessly.

How does all of this work behind the scenes, however? Let's start with what Heroku calls the 'basic unit of composition'.

Dynos


Each dyno is essentially a virtualized Unix container. You execute commands against it. Dynos start out with a default environment (Heroku's Cedar stack); when your app gets installed onto one (as a slug; more on this later!), commands are executed against it based on your app and its dependencies.

Web vs. Worker dynos

Web dynos serve web requests. If your requests trigger things like fetching data from remote APIs or uploading data to S3, they can tie up your web dyno. This is where worker dynos come in: you can delegate these processes to a job queue, and worker dynos will pick them up from there.

So, how do you scale these two types of dynos?
  • Use more web dynos to support more concurrent users
  • Use more worker dynos when your job queue starts getting backed up.
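The delegate-to-a-queue pattern itself is language-agnostic. Here's a minimal sketch in Python, with an in-memory queue standing in for a real shared job store (like Redis) and a thread standing in for a worker dyno; the names are mine, not a Heroku API:

```python
import queue
import threading

jobs = queue.Queue()  # stands in for a shared job store (e.g. Redis)
results = []

def worker():
    """A 'worker dyno': drain jobs so web requests can return quickly."""
    while True:
        job = jobs.get()
        if job is None:          # sentinel value: shut the worker down
            break
        results.append(f"uploaded {job}")  # e.g. a slow S3 upload
        jobs.task_done()

def handle_web_request(filename):
    """A 'web dyno' handler: enqueue the slow work, respond immediately."""
    jobs.put(filename)
    return "202 Accepted"

t = threading.Thread(target=worker)
t.start()
print(handle_web_request("photo.jpg"))  # returns right away
jobs.join()                             # demo only: wait for the worker
jobs.put(None)
t.join()
print(results)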

All manual activity, like console sessions or rake tasks, triggers a 'one-off' dyno where it runs in isolation. This includes all of your 'heroku run' commands.

The Job Queue


Backgrounding tasks or processes is a general concept; Heroku doesn't prescribe how to implement it. With Rails, I've used Resque successfully.

If the user request that triggers the job needs the response, you'll need a strategy for delivering it when the process finishes. Having the client poll to see when its job has finished is generally acceptable.
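The polling idea can be sketched simply: the client repeatedly asks for a job's status until it's done. A minimal in-memory illustration (the job store, status values, and endpoint shape are hypothetical, not Heroku or Resque APIs):

```python
import itertools

# Hypothetical shared job store; a real app might keep this in Redis.
job_store = {"job-42": {"status": "queued", "result": None}}

def worker_finishes(job_id, result):
    """Called by the background worker when the job completes."""
    job_store[job_id] = {"status": "done", "result": result}

def poll(job_id):
    """What the client hits repeatedly, e.g. GET /jobs/<id>/status."""
    return job_store[job_id]["status"]

# The client polls on a timer; here the worker finishes on the third tick.
for tick in itertools.count(1):
    if tick == 3:
        worker_finishes("job-42", "thumbnail.png")
    if poll("job-42") == "done":
        break

print(f"finished after {tick} polls:", job_store["job-42"]["result"])
```

In a real app you'd add a polling interval and a timeout so clients don't hammer the status endpoint or wait forever.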

Dyno Management

What manages these different dynos and keeps them in sync? Heroku uses its 'dyno manifold' to do this. When you deploy new code, all of your app's dynos are restarted. The dyno manifold also monitors your dynos for errors and restarts or moves them accordingly. I think the way the dyno manifold is implemented is one of Heroku's secrets, as I haven't been able to find documentation anywhere. They do say that it coordinates your dynos, manages the programs that operate your app, and generally allows you to remain hands-off.

Slug Compilation

When you git push to Heroku, your code is received by the slug compiler, which transforms your repository into a 'slug': a pre-compressed, pre-packaged copy of your application optimized for distribution by the dyno manifold. When you scale your application by adding web or worker dynos, the slug is distributed to and expanded on each new dyno as well.

Dyno Idling

One thing you'll care about immediately after starting with Heroku is the dyno idling policy. If your app has only a single web dyno running (the default, free option), it will idle out, irrespective of the number of worker dynos. This means that if you receive no web requests for 1 hour, your app is effectively put to 'sleep' (idled).

Subsequent requests to an idled app signal the app's dyno manifold to unidle, or 'wake up', your dyno. This can result in a delay of up to 15 seconds, sometimes longer. Pretty annoying, and an incentive to increase your number of web dynos so one is always there to receive a request.

Check out the Heroku documentation on dynos as well; it's where I got most of my information!