Staticly: Editing Jekyll Sites on the Go

It seems as though I can't stop myself from starting new projects, even though I have yet to finish any of my old side projects. This month (November), the IU Computer Science Club is having a hack month, where we try to finish some sort of project in a month. At first I was going to continue working on one of my existing side projects, and then I thought, "Why do something rational like that?" At the end of last month I gave a short talk to the group about the wonders of running a website/blog on GitHub Pages using Jekyll (like the one you're reading right now). One thing I've felt was absent from my Jekyll workflow was a way to edit posts and pages on the go using my iPhone or iPad. Most people handle this by symlinking their site directory into Dropbox or another cloud storage system and editing files from there. And that's great, and it works. But it's not quite as fun. And unless you have an always-on server and some terminal wizardry, you still have to get back to your computer to publish your site. For instance, if you publish on GitHub Pages, your site only gets regenerated when you push a commit to your GitHub Pages repository. Not ideal.

So that's where the idea for Staticly came from. Staticly is an iOS application that will let you edit posts, pages, drafts, and other files in your Jekyll site from wherever you are. I'm approaching this goal in a few stages.

Stage 1

Stage 1 focuses on standard Jekyll sites hosted on GitHub Pages. This unfortunately rules out Octopress, and anyone who generates their site locally and pushes the output to GitHub Pages. Right now I have no way of generating the site on the iOS device and then pushing to GitHub, so I'm letting GitHub handle that for me. I'm using the GitHub API to download the repository locally so that you can work offline. Then, when you're ready to publish, Staticly creates a commit and pushes it to GitHub, triggering a rebuild of the site. I'm limiting this stage to GitHub-hosted sites because I haven't found a good way to actually git clone something on iOS. I'm using AFNetworking to interact with the GitHub API, and I save the raw git data into Core Data. Originally (read: 11 days ago) I decided to use RestKit over OctoKit because I thought it would be easier, since RestKit has built-in support for mapping JSON back to Core Data entities. However, I just didn't have enough time to learn what was going on in RestKit and debug some errors I was running into. So I switched over to a pure AFNetworking implementation yesterday, and I'm already running into fewer issues. That's not to say that RestKit and OctoKit are unusable; I just didn't have enough time to really wrap my head around them, and I might revisit them later. This implementation alone is probably worth two blog posts once I get it up and running: one on integrating AFNetworking with Core Data, and one on reimplementing the underlying git file structure in Core Data (for those familiar with the plumbing of git, yes, I'm implementing blobs, trees, and commits, because I'm crazy).
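For a rough sense of what reimplementing git's plumbing entails, here's a minimal sketch, in Python rather than the app's actual Objective-C/Core Data code, of how git identifies a blob: it prepends a `blob <size>\0` header to the file's raw bytes and takes the SHA-1 of the result. The tree and commit objects work the same way with different headers and payloads.

```python
import hashlib

def git_blob_sha(content: bytes) -> str:
    """Compute the object id of a git blob.

    git hashes the header "blob <byte length>\\0" followed by the
    file's raw bytes; trees and commits use "tree"/"commit" headers.
    """
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# Matches `echo "hello" | git hash-object --stdin`
print(git_blob_sha(b"hello\n"))
```

Storing those hashed objects as Core Data entities, with trees pointing at blobs and commits pointing at trees, is essentially what rebuilding the git file structure means here.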

Stage 1 also includes a Markdown parser (plus a parser for small parts of Liquid). My plan for stage 1 is to just parse the Markdown pages into HTML, and not worry about implementing everything Jekyll does. My theory is that people know what their website layout looks like, so if we can generate and preview the actual content that winds up in the {{ content }} tag, that might be good enough for stage 1.
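To illustrate that theory: once the page body is rendered to HTML, slotting it into the user's existing layout is a simple template substitution on the {{ content }} tag. A Python sketch of the idea (the layout string here is made up for the example, and a real renderer would convert the Markdown first):

```python
import re

def render_into_layout(layout: str, content_html: str) -> str:
    """Replace Liquid's {{ content }} output tag with rendered HTML.

    Whitespace inside the braces is optional, as in Liquid itself.
    A callable replacement avoids re.sub treating backslashes in the
    HTML as escape sequences.
    """
    return re.sub(r"\{\{\s*content\s*\}\}", lambda _: content_html, layout)

layout = "<html><body><main>{{ content }}</main></body></html>"
print(render_into_layout(layout, "<h1>Hello</h1>"))
```

That's the stage 1 bet in a nutshell: preview the body accurately and let the surrounding layout stay static.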

Stage 2

I'd like to extend Staticly's previewing capabilities in stage 2 to handle more Liquid tags and control flow. I'm also hoping to include support for more hosts in stage 2; I know that a lot of people are hosting on Heroku. Octopress support might be able to squeeze its way into stage 2 as well.

Stage 3

Finally, in stage 3 you should be able to preview your entire site before generating it. That means either reimplementing pretty much all of Jekyll, or finding a way to run Jekyll on iOS (which would be no easy task).

Suggestions

Staticly is going to be a major work in progress over the next few weeks, and I'll try to post updates as they come. If you are a Jekyll user, or you're thinking about becoming one and Staticly sounds like something you might be interested in, I'd love to hear from you. I'm fully open to suggestions, and to requests to beta test, provided that you're running iOS 7. The code is posted on GitHub.

Gamification: Here's the Plan

I've got a problem. A procrastination problem. Like most people I know, I have a slight tendency to put off the boring things in my life and do other things instead. The problem is that this doesn't just apply to boring things. I have a tendency to put off things that could be really fun in order to do something that is just normal fun. I have a whole bunch of habits that I want to have but don't. That's my problem. And here's the solution.

In January, I read an article on Lifehacker called Gamify Your Life: A Guide to Incentivizing Everything. In the article, the author, Alexander Kalamaroff, mentioned that there wasn't yet an iPhone or Android app for tracking the system he designed. "Well hey," I thought, "I'm learning iOS development. Maybe I should contact Alex and ask him if he would be all right with me making an app centered around his system." Alex was fine with it, and thus an app was born. If you haven't read the post yet, go and do it now. (Seriously, what are you still doing here?) I'm planning on starting development on this app soon, like really soon. Mostly because I want it, but also because there's something really great going on with gamification. I'm not saying that people need to be tricked into forming better habits, but if I were saying that, this is the app I would build. You can bookmark this page or the tag page to get updates on the project. You can also check out the link on the Projects page (check the navbar) for more information. If you have any questions or suggestions, hit the comments below.

On Wearable Computing

If you're anything like me (and most of the people in my generation), it's rare to find you somewhere without your smartphone. Along with my wallet and keys, my iPhone goes into my pocket when I get dressed in the morning, and rarely leaves my side. Some call it an addiction; I call it convenience. Having a mobile computer like this in my pocket at all times is something that was unthinkable when I started using a computer, way back in the good old days of Windows 95. My dad got a BlackBerry after that, and it was fascinating to me. That thing could read emails, probably go on the internet, and had a full keyboard (and, most importantly, it had BrickBreaker). And it went in your pocket. Look how far we've come.

If you haven't heard of Google's X Lab, I would advise you to hit your favorite search engine and look them up. They're the people that are giving you self-driving cars, teaching computers to recognize cats, and this. Google Glass is a project that X has been working on for what we can assume is a long time. It's a head-mounted computer, and allegedly, it's heading out to developers this month.

I'm excited to see what people can hack up at Google, but I'm more excited about what this means for wearable computing as a whole. Back in the day, wearable computing meant wires all over you, strange visors, and huge battery packs. With the so-called "smartphone revolution," those bits are no longer necessary. I'd love to see third parties and other cell phone manufacturers start planning their answers to Glass. You already have the entire internet in your pocket. Hook your phone up to your display over Bluetooth and you're golden. Create an API that works on all smartphone operating systems and you're even better off. My fear is that Google will keep Glass to itself and restrict it to Android. That would, in my opinion, be a huge mistake.

I do have a few things that I think could be improved with Glass. First, input. Right now, it seems that you input data into Glass by speaking out loud. You can move around by tilting your head. That's all fine and good, but voice is not the way to go. At least not yet. I talk to myself all the time (it's how I think), but I don't want other people to hear me do it. The answer already exists: chorded, handheld keyboards. With 5 keys, it's possible to generate 31 different combinations (trust me, I checked my math). That's enough for all 26 letters in the English alphabet, with a few combinations left over for keys like shift or enter. Better yet, turn some of the combinations into common bigrams (pairs of letters) like "th" or "in." Make the keyboards Bluetooth and you've changed the way people interact with computers, as well as the way they interact with their new wearable computer.
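The math here does check out: five keys give 2^5 - 1 = 31 non-empty press combinations. A quick Python check, with placeholder finger names just for illustration:

```python
from itertools import combinations

# Hypothetical names for the five chord keys.
KEYS = ["thumb", "index", "middle", "ring", "pinky"]

# A chord is any non-empty subset of keys pressed together,
# so there are 2**5 - 1 = 31 of them.
chords = [combo for r in range(1, len(KEYS) + 1)
          for combo in combinations(KEYS, r)]
print(len(chords))  # 31
```

Enough for 26 letters plus five leftover chords for modifiers, exactly as described.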

Second, privacy. For some people, privacy weighs heavily on their minds. The thought of sending an army of people out into the world with cameras mounted on their heads sends shivers down the spines of the privacy conscious. And they have a valid point. I don't want people to know everything about me just by looking. Don't even get me started on the unflattering videos and pictures of me that are bound to wind up on the internet. But to me the solution is simple. What's the thing we do every single day when we meet someone new? We shake hands (unless you're a germophobe, in which case maybe you bump elbows; I'm not sure). Use image recognition to detect that the user is shaking hands, and send a request to the other person's device. Access is limited at first: maybe they can see where you work or get your email address. But now those two computers know about each other. So when you're together, your computers can get to know each other better, just like you are. When you become friends on Facebook, you can see their latest status update when you see them again. When you follow each other on Twitter, same thing. Privacy is easily managed, as long as you're willing to help the user out.

One more point and then I'll be done: safety, particularly when driving. The answer is, again, image recognition. You just got behind the wheel of a car. Your computer knows that, because it knows what a steering wheel looks like, and you can no longer hit up Facebook and Twitter to see what's happening. You can't send text messages, and your display is very limited. Maybe you can only see the map of where you're going when there are no cars in front of you. People worry that because you have to focus on a screen right in front of you, your awareness of the real world will go down. And it will, but we can mitigate the risks. The possibilities are endless. Just wait and see what developers come up with when they get their hands on this.

Google is a second-class citizen on iOS, and it matters

You may have heard about this little feud that's been going on between Cupertino and Mountain View, California. You may have smug friends who tell you you really should be using an iPhone. Or maybe you have friends who tell you to ditch Apple's closed platform and come to Google's "open source" operating system.

Our day-to-day activities are now shaped by our mobile phones. We keep our calendar appointments, check our email, check our social networks, and pass idle time on the small devices sitting in our pockets. Recently, the choice between an iPhone and an Android phone has become a choice of ecosystem. On Apple's side, you have the iTunes Store with more content than you could dream of, the App Store, and the stock iOS apps. On Google's, you have Gmail and its assorted calendar and contact options, and the Google Play store for apps, movies, and more.

You may have also heard about the recent iOS Maps "debacle." With the release of its latest software update, Apple decided against using mapping data from Google and began using a home-grown solution. Millions of people will tell you that it was an awful choice, and others will tell you that it was fine. Then Google came out with a Maps application of its own, and the internet made it sound like this was the moment everyone in the whole world had been waiting for. Bloggers everywhere switched from the awful Apple application and found salvation in the Android-esque offering from Mountain View.

And since they had already started writing articles about switching away from one stock iOS app, many more blog posts, like this one from Business Insider, were written about how their authors seldom used any of the stock iOS applications anymore. Many writers are heralding the fall of the stock iOS application as a triumph for Google and a total loss for Apple.

Guess what? Neither company cares. Google is in business solely to serve advertisements to your eyeballs and collect data to make those ads "better." Apple is in business to sell hardware, and doesn't care if you never open any apps aside from Phone and Messages. And here's the kicker: even if you find the feature set of the stock applications limited, the integration is what makes them killer. When everything you do on your phone ties together, the experience is better. Links on iOS are always going to open in Safari. Email addresses are always going to pop you into the Mail application. Addresses are going to take you to the Maps application. Setting aside the fact that Chrome is an overall slower browser on iOS because of the limitations on UIWebViews, I want to use the browser that other links are going to open in, so that all my browsing is in one place. I've used other mail applications with better Gmail support, but I came back to Mail because the system integration wasn't there. At the end of the day, the "inside-out attack" Google is making, according to Business Insider, will fail. Mobile phones are operated with one hand, in short time spans, and that's when integration with the core operating system is key.