
Challenging Established Dogma

I’ve been a longtime believer in the power of challenging established dogma and authority. Just ask my mom – she’ll tell you what a little pest I was growing up.

When I was young, my grandmother was a huge influence on me and encouraged me to question the authoritative aspects of society. Later, I met a high school teacher who introduced me to literary critical theory, which kind of sparked everything about who I am today.

So it’s with some disgust that I recently realized that I have become an “authority” in my field. Ew.

Let’s back up a minute.

There is a concept in psychology called the Dreyfus model of skill acquisition. It’s awesome – I think about it a lot when teaching, because depending on my audience, I’ll present the same material differently. For example, beginners in a field tend to see things through a lens that polarizes solutions as “good” or “bad” depending on how those solutions adhere to principles taught by experts. Beginners adhere rigidly to these principles. As learners progress in their field – towards competence, proficiency, and expertise – they learn that knowledge is not absolute and that the things experts teach are only well-informed opinions. Solutions to problems aren’t “good” or “bad” based on how well they adhere to a principle, but rather how well they solve the problem.

This is really useful to me as an educator. If I’m speaking to a beginner, confidently making generalizations will help them learn the fundamentals. Beginners need simplified models. Later, with intermediate students, I try not to make these generalizations because they’re no longer useful – I try and show them that what I teach is really just my opinion, and that they ought to question it. But if I try and tell someone that a fundamental principle is just an opinion too early, they won’t have confidence in what they’re learning. It’s a tricky balancing act.

So anyway, I came to the realization that I am part of “the man” of iOS developers – pushing my dogma wherever I write – when a reader wrote in with a question. (I am publishing their question here anonymously with their permission.)

I was reading your blog post here: - and had a question. You mention it would be "very, very bad" to make the UITableViewCell the delegate/dataSource of the UICollectionView, but you give no reasons. Why?! (but seriously, I am very curious - what are your reasons for that declaration?)

I look forward to hearing from you!

What an excellent question.

Why is it excellent? Because here in this moment, we see them question the established dogma of iOS. I wanted to encourage the person who wrote me to ask more questions like this one, so I gave them a thorough answer.

Thanks for writing me with this question. I think that questioning dogma is important in our field.

In this case, the dogma is something fundamental to the way that Apple, and most of the community, recommend architecting iOS apps: Model-View-Controller. In MVC, all objects are classified as either a model, view, or controller, and only one of those. So a view isn’t a controller, etc. In general, controllers serve as datasources for views because they have access to the model objects, which views do not. These controllers mediate the conversation between views and models.

So a view acting as a datasource implies that the view has access to the model data, which should not happen in MVC.
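To make this concrete, here's a minimal sketch (class, property, and cell-identifier names are all hypothetical) of what MVC prescribes in the reader's scenario: the view controller owns the model and acts as the collection view's data source, so no view ever needs access to model objects.

```swift
import UIKit

// Hypothetical example: the controller, not a cell, serves as the data source.
class PhotosViewController: UIViewController, UICollectionViewDataSource {
    // Model objects live in the controller, where MVC says they belong.
    var photoTitles: [String] = []

    func collectionView(collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return photoTitles.count
    }

    func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCellWithReuseIdentifier("Cell", forIndexPath: indexPath) as UICollectionViewCell
        // Configure the cell from the model here; the cell itself only ever
        // receives display-ready values and never touches the model layer.
        return cell
    }
}
```

If a UITableViewCell were the data source instead, it would need a reference to those model objects, collapsing the view and controller roles into a single object.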

Why does this matter? Well, MVC isn’t something extra to be added to app code. It’s a framework that restricts what we can do. It’s certainly possible to write an entire app using a single file, or only a few classes, but we don’t do that because it makes it hard for us to reason about what code lives where. It may be more convenient or faster to write code using fewer, larger files, but maintaining this code is very time-intensive. So we use MVC to restrict how we structure code.

Sometimes, it’s convenient or even necessary to break the rules of a framework like MVC, but we should only do so as a last resort, with solid justification, and proper documentation about our reasoning.

Instead of just saying “because of MVC”, I chose to give an opinion. Remember, that’s all rules are – opinions – and opinions, no matter how well-informed they are, should be questioned. Especially in our field, where the environment that opinions are formed in changes so quickly.

I find helping developers learn and grow to be incredibly rewarding and I’m always trying to improve my teaching skills. If you’re a blogger or an OSS contributor or you answer questions on Stackoverflow, remember the Dreyfus model of skill acquisition. Sometimes a generalized rule is what a beginner needs, but eventually, those beginners are going to start challenging those generalizations, and that is awesome. Do not discourage this behaviour.


Your First Swift App

This morning I launched my new Swift book, Your First Swift App. It's a work-in-progress that will be updated as I write more chapters; currently, the first three of eleven are finished. All of the code is available on GitHub and will be updated as I go. Anyone who bought the older version for iOS 6/7 with Objective-C should have already received a coupon for a free copy of the Swift version.

This has been a struggle for me. I actually expected the book to be completed by the time Swift hit 1.0, but I struggled a lot over the Summer with depression. Only over the past few weeks have I gotten the motivation to continue. I'm grateful to my wife and friends for supporting me through the past six months.

To the degree that you're comfortable helping me spread the word about my book, I would be grateful for any tweets or blog posts about it. Thanks a lot, everyone.


Anyone Can Learn

I've started a project called "AnyoneCanLearn". You can read the information there for the goals and values and everything; this blog post isn't about promoting it or explaining it. It's about providing my motivation.

For a long time, I've been asked questions about iOS development. People email me or tweet at me all the time because they have questions about one of my books or open source projects. I love getting these emails and I try my best to respond to each one in a timely manner. Usually, though, if the question is novel, I'll suggest asking it on Stackoverflow. That way, others can benefit from the answer I put there. My common refrain has been "open source your question so they can open source the answer", and I've thought this was a good idea.

The other day, I asked a question on Stackoverflow about a topic that I am a beginner in. And one of the comments struck a nerve with me. It was unconstructive and inflammatory, and it was from someone with over 60k reputation points on Stackoverflow. I felt pretty hurt – and that's despite having grown a thick skin.

I put myself in the shoes of a beginner asking their first question online and imagined the kind of attitude that they'd get from developers. A friend on Twitter pointed out that it's even worse for women. And I got thinking. Wouldn't it be awesome if there were a place that provided resources for how to ask good questions? For how to answer questions with respect? And just generally how to make being a software developer a more awesome experience?

After talking it over with my wife and some friends, I decided to pull the trigger. Maybe this will be a flash in the pan and fizzle out, but I really do hope that it becomes something that helps learners and teachers contribute to their communities in more positive ways.



So yesterday happened. This happened. Ugh. Whatever. Apple’s made bad decisions before and they’ve survived.

But this is not a post discussing the Watch. Well, it is, sort of. I want to talk about the event itself.

Apple hyped the shit out of this. Their invitations were sent out indicating that this was going to be held at the Flint Center, where they announced the original Macintosh. It’s a bigger venue and it evokes bigger kinds of product announcements. OK, fine.

Not only did Apple have a livestream (of sorts), they also had their very own liveblog about the event. That’s a first. Whatever.

The presentation starts with an Ok Go-esque inspirational video describing how we can make the world a better place (together!). I mean, I’ve seen these kinds of videos before – they make you feel good about buying Apple stuff and working on their platforms. Just like the one that shows how blind people can finally use their iPhone to go for walks in the forest. Standard fare.

So then the event really starts.

Blah blah iPhones blah blah Pay blah. Whatever, everyone’s at this event because they want to see what Tim Cook is going to announce that will change Apple’s history. They want to see the next chapter in their story, or whatever. So at the “end” of the presentation, Tim Cook does “one more thing…” And it’s here in our story that I begin to have a problem. But I’ll finish recounting the events, first.

Blah blah Watch blah blah Pay with Watch blah.

So it’s a wearable that does … what the other wearables do. (I promised myself I wouldn’t complain about the goddamn watch – that’s another blog post). Near the end of the presentation, Cook says something… interesting.

So now, the foundation of Apple is built on the best personal computers in the world, the Macintosh; the best tablets in the world with iPad; the best phones in the world with iPhone …; and now adding Watch.

The foundation of Apple. The foundation of Apple. Really. You know how something becomes the foundation of a company? By being an unparalleled success. Like those iPods that you kind of failed to mention there. The Watch isn’t even launching this year and you’re saying that it is now part of the foundation of Apple? Right up there with iPhones?

Uh huh.

OK, well at this point U2 comes on and I turned off the livestream. But I was thinking about this. About what Tim Cook said there, the hype, the anticipation, everything. And I got a bit upset.

Tim Cook used the “one more thing…” line that Steve Jobs was known for. It’s been about three years since Jobs’ death, which I think is a bit soon, but it’s his choice to use it. What bothered me, though, is that invoking Jobs’ words was just part of the large machine Apple designed to hype up this announcement. He knew that nerds would go crazy over those three words, so he used them. Regardless of whether or not you hold Steve Jobs in high esteem, the decision to use those three words is a calculated move designed to increase people’s awareness of this product.

And that’s when it kind of hit me.

Apple is just a company.

I like Apple as a company. They make fantastic products. They run their company in ways that I admire. But more than that, I had always kind of thought that while other companies were just in it for the money – Samsung is an easy target here – Apple was in it for something else. I was tricked into thinking that Apple’s motivations were somehow more noble than those of Samsung. But they’re not. They’re both just companies and they both just want to make money.

Those videos I mentioned earlier? The ones that make us feel good about being iOS/OS X developers? The ones that make us feel good about buying Apple products? They’re only there because Apple wants to make more money. That’s all. Like, “hooray blind people use iPhones” and everything – I’m really glad that Apple makes their devices accessible – but I don’t really believe Tim Cook’s assertion that they don’t consider the ROI of accessibility. Not anymore.

So yesterday happened. Apple showed its cards, the whole of Twitter exploded in one giant nerdgasm, and I realized that they’re just a company like any other.

And it broke my heart a little.


Shooting Film

I’ve been blogging recently about photography, and a few readers have asked me questions about the tools that I use to make photos. I think it’s an idea worth exploring, and I’d like to explain (not justify) my reasoning for relying on film photography.

Many film photographers will insist that film has a different “feel” to it – that the images captured with it are inherently different from those captured on digital. I think they’re right, but that’s not the reason I choose to use film.

The premise is that you experience the world differently when you hold a camera in your hand. It’s something I believe most – probably all – photographers would agree on. Taking a walk by yourself is intrinsically different from taking a walk with a camera. You see things in new ways. You see things you wouldn’t have seen. You’re looking for them now. Everything feels different.

So let’s accept that premise for this post – that being able to capture a photo changes the way you experience your surroundings. I think a logical extension of this premise is that how you are able to capture a photo changes your experience in the same way that being able to capture one does. Let me explain.

Ansel Adams, a famous American landscape photographer, used large-format cameras. These were large wooden boxes that held film that was 4x5” or larger and sat upon sturdy tripods. In order to take a photo, he had to carry all that gear with him. Let’s compare the cameras he began using in the twenties with the cameras people began using 60 years later, when single-lens reflex (SLR) cameras started to gain popularity: handheld devices capable of capturing photos on (typically) 35mm film. Much, much smaller than the equipment Adams was using.

It isn’t a stretch to believe that the different equipment would enable the same photographer to see their environment differently. 4x5” film is very different from 35mm film, as are the cameras using those two formats. Had Ansel Adams used 35mm SLRs, it’s likely he would have captured different images than the ones he did. And it follows that his experience of the world would have been different, too.

To be clear, I am not saying one medium is better than the other. They are simply different, each with their benefits and drawbacks. At the end of the day, they are simply tools. Consider a painter leaving in the morning to capture a beautiful sunrise. Whether they choose oil paints or watercolours will change the outcome of their painting – they’ll be different, but one is not intrinsically better than the other.

So far, we’ve established that the kind of camera you carry with you affects your experience of the world. However, the discussion up to this point has only varied film formats. What about bigger shifts, like film vs. digital? That’s an interesting conversation.

Like different sizes of film, differences in physical media affect the way a photographer experiences the world. Despite having similar uses, the two media are very different, each with its own set of pros and cons. The same scene captured with film and digital cameras will result in very different images. For example, last year in Mexico, I took the following two photographs within minutes of each other. One with a Canon 5D Mark III, and the other with a Leica M6 on Portra 400 film.

These photographs aren’t just different because of their media, they’re different because when I held the cameras to my eye to capture those photos, I saw the light differently. The medium affected how I chose to make those photos.

So here we get to the fun part. If the tools photographers use affect the way they see the world, it’s not unreasonable to assume that a photographer familiar with different media might prefer one medium over another. And it’s certainly reasonable to imagine a photographer whose preferences change over time. Maybe they prefer to walk around with digital one day and film the next. Or with T-Max 400 (black and white) film one day and Portra 400 (colour) the next. Don’t even get me started on the differences in the types of cameras in the same medium (digital SLRs, rangefinders, bodies with live-view screens (or EVFs), point-and-shoots, smartphones, the list goes on).

So yeah. People ask me why I like shooting on film, and my answer is that film photography is inherently different from digital photography. The different tools affect how I see things. Sometimes I want to see things with film. Sometimes digital. It depends on my mood.

It’s chic to make fun of people who prefer to write on typewriters, or in Moleskine notebooks, or in “distraction-free writing environments” (this post was itself written in iA Writer). But I don’t have a hard time believing people when they say that the types of tools they use affect how they produce their work. I’m going to continue to use film for as long as they keep manufacturing it.


Exploring UIAlertController

This morning, I was working on the sample app for Moya, a network abstraction framework that I’ve built on top of Alamofire. I needed a way to grab some user text input, so I turned to UIAlertView. Turns out that that’s deprecated in favour of UIAlertController. Hmm.

Looking around the internet, there weren’t very many examples of how to use this cool new class, and the documentation was sparse at best. Let’s take a look at the high-level API and then get into some of the nitty-gritty. (I’m going to write this in Swift because I am not a dinosaur.)

UIAlertController is a UIViewController subclass. This contrasts with UIAlertView, a UIView subclass. View controllers are (or at least, should be) the main unit of composition when writing iOS applications. It makes a lot of sense that Apple would replace alert views with alert view controllers. That’s cool.

Creating an alert view controller is pretty simple. Just use the initializer to create one and then present it to the user as you would present any other view controller.

let alertController = UIAlertController(title: "Title", message: "Message", preferredStyle: .Alert)
presentViewController(alertController, animated: true, completion: nil)

Pretty straightforward. I’m using the .Alert preferred style, but you could use the .ActionSheet style instead. I’m using this as a replacement for UIAlertView, so I’ll just discuss the alert style.

If you ran this code, you’d be presented with something like the following (on beta 7).

Weird. The title is there, but the message is not present. There are also no buttons, so you can’t dismiss the controller. It’s there until you relaunch your app. Sucky.

Turns out if you want buttons, you’ve got to explicitly add them to the controller before presenting it.

let ok = UIAlertAction(title: "OK", style: .Default) { (action) -> Void in /* respond to OK */ }
let cancel = UIAlertAction(title: "Cancel", style: .Cancel) { (action) -> Void in /* respond to Cancel */ }
alertController.addAction(ok)
alertController.addAction(cancel)

This is worlds better than UIAlertView, despite being much more verbose. First of all, you can have multiple cancel or destructive buttons. You also specify individual closures to be executed when a button is pressed instead of some shitty delegate callback telling you which button index was pressed. (If anyone out there makes a UIAlertController+Blocks category, I will find you, and I will kill you.)

If we added the above configuration to our code, we’d get the following.

Way better. Weird that the message is now showing up. Maybe it’s a bug, or maybe it’s intended behaviour. Apple, you so cray cray. Anyway, you should also notice that the “OK” and “Cancel” buttons have been styled and positioned according to iOS conventions. Neato.

What I needed, however, was user input. This was possible with UIAlertView, so it should be possible with UIAlertController, right? Well, kinda. There’s an encouraging instance method named addTextFieldWithConfigurationHandler(), but using it is not so straightforward. Let me show you what I mean.

alertController.addTextFieldWithConfigurationHandler { (textField) -> Void in
    // Here you can configure the text field (eg: make it secure, add a placeholder, etc)
}

Straightforward. Run the code, get the following.

The question now is this: how do you, in the closure for the “OK” button, access the contents of the text field?

There is no way for you to directly access the text field from the closure invoked when a button is pressed. This StackOverflow question has two possible answers. You can access the textFields array on the controller (assuming that the order of that array matches the order in which you added the text fields), but this creates a reference cycle (the alert action holds a strong reference to the alert view controller, which holds a strong reference to the alert action). That means a memory leak for each controller you present.

The other answer suggests storing the text field that’s passed into the configuration closure in a property on the presenting controller, which can later be accessed. That’s a very Objective-C way of solving this problem.

So what do we do? Well, I’ve been writing Swift lately, and whenever I come across a problem like this, I think “if I had five years of Swift experience, what would I do?” My answer was the following.

Let’s create a local variable, a UITextField? optional. In the configuration closure for the text field, assign the text field that we’re passed to that local variable. Then we can access the local variable in our alert action closure. Sweet. The full implementation looks like this.

var inputTextField: UITextField?
let alertController = UIAlertController(title: "Title", message: "Message", preferredStyle: .Alert)
let ok = UIAlertAction(title: "OK", style: .Default) { (action) -> Void in
    // Do whatever you want with inputTextField?.text
}
let cancel = UIAlertAction(title: "Cancel", style: .Cancel) { (action) -> Void in
    // Respond to cancellation
}
alertController.addTextFieldWithConfigurationHandler { (textField) -> Void in
    inputTextField = textField
}
alertController.addAction(ok)
alertController.addAction(cancel)
presentViewController(alertController, animated: true, completion: nil)

I like this a lot. It avoids polluting our object with unnecessary properties, avoids memory leaks, and seems pretty “Swift”. I’ve created a GitHub repo that I’ll keep up to date with future betas, etc.

So yeah. The new UIAlertController has some great benefits:

  • It’s explicit
  • It conforms to iOS composability conventions
  • It’s got a clear API
  • It’s reusable in different contexts (iWatch vs iWatch mini)

The drawbacks are:

  • It’s unfamiliar

As we march forward into this brave new world of Swift, we need to reevaluate our approaches to familiar problems. Just because a solution that worked well in Objective-C might work OK in Swift doesn’t make it a solution well suited for use in Swift. As developers, we should keep an open mind about new ideas and experiment. The way I look at it is like this: right now, the community is pretty new at Swift. We’re racing in all different directions because no one really knows what the best practices are, yet. We need this expansion in all directions, even if a lot of those directions are going to turn out to be bad ideas. If we don’t throw it all against the wall, we won’t figure out what sticks.

So next time you try and do something and get confused because Swift is unfamiliar, try all kinds of things. Could be you end up creating a brand new convention that’s adopted by the whole iOS community for years to come.


Copenhagen/Warsaw Tour

Last week, my wife and I returned from a trip to Copenhagen, then Warsaw. I spoke at two meetups in the cities, got to visit some new places, and take some photos. It was a really great trip.

In Copenhagen, I gave a rendition of my Solving Problems the Swift Way presentation at a GotoNight. Great attendance with some excellent questions. Really nice city – we stayed at an Airbnb very close to the city centre.

The following week, we went to Warsaw and I gave a talk on ReactiveCocoa at Mobile Warsaw. Really cool venue – sort of an outdoor cafe that got pretty chilly later in the evening. Robb Böhnke was also presenting and gave an inspiring talk about iOS accessibility.

Returning home after a week-long, pretty exhausting trip, I fell asleep on the plane. When we moved to Amsterdam, I looked forward to getting to know the local Appsterdam folk. I didn't anticipate having the opportunity to travel about Europe and give presentations to different local groups. It has been one of the most gratifying aspects of my time here so far.


Photographic Rut

Lately, I've been feeling like I'm in a bit of a photographic rut. Not taking as many photos, not developing them as quickly, and not posting anything when I do. It's been going on since about April, which is a shame. This past week, though, I've made an effort to get out there and take some photos. It's been hard, since I've been sick for the past four days, but fresh air helps, even if it's just a ten-minute stroll around the block. And I always make sure I have a camera with me.

There's a camera store nearby that has old film cameras. Has, not sells, because a lot of them aren't for sale. Old Leica IIIf's and the like. Pristine condition. I often go in just to admire them.

This week, Ashley and I made a point to travel up the Haarlemmerbuurt, where there's another awesome camera store. I like looking in the used section, since much of my current kit is older, and I just like to admire the nice lenses. Well, this time, I saw something interesting.

I could only see it from straight-on, as it was in a glass cabinet, surrounded by other camera things. It looked like my Leica M6, but didn't have a film-advance lever, so it was digital. From the outline of the body, it wasn't an M8. But it didn't have a red Leica logo on the front. Hmm. Could be a Monochrom, but an M9 seemed more likely.

Actually, an M9-P. I asked how much. The price was more than reasonable (there's some slight brassing around the top and bottom plates). I asked to look at it. Perfect working condition.

We left the store and went to a café, where I talked it over with my wife. She knows how badly I've wanted a digital Leica, and we talked about the pros and cons, examined our finances and upcoming commitments, and finally decided to pull the trigger.

New gear isn't a reliable way to get out of a rut. I know that from experience. However, this has been on my list for a long time, and things just kind of lined up. I've looked forward to the day I could take a photo with a Leica and share it online within minutes.

My first impressions of the digital Leica shooting experience are good. Better than my Fuji X100S. But different from shooting film. Not better or worse, but different. I'll probably continue to shoot 50/50 film and digital.

I'm really looking forward to exploring new places with my new camera. We've got upcoming trips to Spain, Italy, and Russia, and I can't wait.


Solving Problems the Swift Way

Recently, I was asked to speak at SwiftCrunch, the first ever Swift hackathon. I gave a talk on solving problems using idiomatic Swift; that is, how do we solve problems "the Swift way"?

What's really key – fundamental to both my presentation and my belief about Swift – is that we, as a developer community, are going to face problems in Swift that we are already familiar with. The first time you go to implement UITableViewDataSource in Swift, you're going to be solving what's likely a problem you've solved before in Objective-C. This time, you're using Swift. The naïve approach to solving this familiar problem would be to use a familiar solution, but that would be a missed opportunity. Swift presents many new language features and many new ways to solve existing, familiar problems. It would be a shame not to explore those new solutions to see if maybe some of them are better than the Objective-C ones.

So here's my presentation. My slides are online, too. Please send any feedback you've got!

/Ash Furrow

Sharing is Selfish

OK, OK, not all sharing is selfish, of course. A more accurate headline would have been Sharing Can Be Selfish, but I could have written Four Mind-Blowing Reasons Why Sharing Makes You Rich, so count your blessings.

So let's talk about the selfish benefits of sharing knowledge. To do so, we'll have to define what that actually means.

Sharing knowledge. Hmm.

I think that the benefits of sharing knowledge for a price are pretty clear: you get paid. This includes people who write books, professional scientists, and those creepy bastards at Experts Exchange. So for the sake of argument, let's limit our discussion to freely sharing knowledge. This would include, for example ...

  • Releasing software under an open source license.
  • Contributing to existing open source software.
  • Posting answers to Stack Overflow questions.
  • Writing blog posts, even if your blog doesn't have ads.

I'm sure there are others, but these are the big ones.

(Aside: what's really interesting to me is that the first two, probably the most important ones, are freely giving away the primary product of software development. To my knowledge, this is truly unique to the software development industry. Designers don't typically open source their PSDs, civil engineers don't open source their building designs, and lawyers don't open source their law research. So when we talk about freely sharing knowledge, I think that it's awesome that this is occurring in the software development industry at a rate that is unprecedented in human history.)

So what are the benefits of sharing? What's in it for you? I've narrowed it down to four key benefits.


Exposure

First up is the most obvious: exposure. When you share what you know, you put your name out there. You get Twitter followers. You get GitHub stars. You get a higher PageRank. Maybe some new RSS subscribers to your blog. Who knows. The point is that you get your name out there.

Why does this matter? Well, never underestimate the power of ego, but let's talk about tangible benefits. To do so, let's consider some examples.

My former employer, Teehan+Lax, gives away tools that they've developed. Primarily, the design source files for designing iOS interfaces. These have been used by thousands of developers all over the world and have helped make Teehan+Lax a household name in iOS design. These templates are even integrated into Sketch 3. Now, when someone out there needs an amazing app design, they know exactly who to contact.

Too abstract for you? OK, well consider and NSHipster – two sites that were created in order to give away knowledge to the iOS developer community. Their organizers are now able to use their popular sources of information in order to promote books that they've written. By sharing some knowledge for free, they can use their sites to sell more of their books. Super-awesome!

Validation of Ideas

This is actually one of my favourite benefits of sharing knowledge. When you share an idea, there are precisely two scenarios that may unfold:

  • Your idea is awesome. You thought so already, but now you know for sure.
  • Your idea could be improved. People point this out, and now you've learned something.

Over time, by exposing ideas to the world, you end up with better ideas. If you open source a component of an app that you've built, and someone points out a flaw, then your app just got better. Nice.

There is a danger, of course, in sharing ideas like this. What if someone really hates your idea? You could end up being ridiculed. After all, the internet is full of terrible, terrible monsters.

This will always be a danger, but you don't have to grow thick skin in order to be confident in sharing your ideas. Just follow these three steps in order to create a bullet-proof idea:

  1. State your assumptions.
  2. Explain what you tried first, and why it didn't work.
  3. Explain what you ended up with, and why you think it's the best solution for your problem.

By explaining how you ended up at an idea, other developers are very likely to offer constructive criticism. Maybe one of your assumptions is incorrect, or maybe your solution isn't the best because you were unaware of a helpful API. If you explain how you arrived at a solution, then others can explain where you went wrong. It's like showing your work on math homework – even if you end up at an incorrect answer, at least you get partial credit for using the correct process.

In any case, this three-step process brings us to our next benefit of sharing.


Becoming an Expert

Here is a key one, which heavily influences me when I teach. To illustrate how this benefit works, let me tell you a story.

Last year, in the lead-up to the iOS 7 launch, I wrote some blog posts for Teehan+Lax. One of them was about the new custom UIViewController transitions API. This was a topic that I had identified as a great opportunity to write about: there was no WWDC sample code demonstrating how to use the API and, frankly, the WWDC presentation was very confusing.

I spent time investigating how to use the API, to understand its design and to test its boundaries. We released the blog post and its accompanying GitHub project, both of which became important sources for someone learning this new and confusing API.

Importantly, I was now an expert in this API. Later, when I had to write a custom view controller transition for a project at work, I was able to draw upon that knowledge and complete the task quickly and accurately.

Often, when I begin to write about a subject, I don't really know what I am talking about. But in trying to explain the subject, I identify the gaps in my understanding, which makes it easy for me to fill them in. By sharing knowledge in well-informed blog posts, anyone can help teach themselves, with the benefit to others as a happy side-effect.


Reciprocal Altruism

This is the final benefit to sharing knowledge, and it's one that I used when writing this blog post. I had a few ideas about the benefits of sharing, but I wanted to verify those ideas and to get some more.

I was able to just ask Twitter what they thought and get people to give me their ideas, for free. Why would anyone answer some asshole on Twitter? Well, there is a concept in evolutionary biology called Reciprocal Altruism. The idea is simple: you scratch my back, I'll scratch yours.

People who know me know that I share knowledge, and are more likely to share knowledge with me. So the next time I need ideas for a blog post, or a Stack Overflow question answered, or a GitHub issue clarified, I can rely on that social support network. Cool.


I've laid out the four main selfish reasons that it makes sense for you to share what you learn. Of course, not only does it help you when you share, it helps everyone. And if everyone gets better at this software development thingy, you'll get better, too. Rising tides lift all boats, after all.

I've been a long-time advocate for sharing what we learn, while we learn it. The fact is that at the very moment you acquire some piece of knowledge, you have a unique state of mind. You are undergoing the mental process of transitioning from ignorance to understanding and, I believe, are uniquely qualified in that moment to teach others what you have just learned. You remember the exact state of mind you had before it "clicked" and can share the mental process that led to that revelation. Every developer out there should have a blog where they write about things that they – until very recently – did not understand.
