Swift - Child of the LLVM

This post is the second in a series on the LLVM (the compilation tool chain pioneered by Chris Lattner and adopted by Apple). The first post spoke about the project at a high level, broadly outlining its history and the reasons Apple adopted and developed it.

With this article, I want to look in more depth at how the LLVM enabled Swift to advance beyond Objective-C and what benefits this has brought.

Swift came about as a direct result of Apple’s adoption and advancement of the LLVM compiler. See the first article in this series for more details. We’ll review briefly here for context!

LLVM Project

The LLVM compiler project was started by Chris Lattner at the University of Illinois. Apple became interested in it both for its ‘language-agnostic’ design and for the potential to more easily compile to many target architectures.

The benefits brought by the LLVM did allow Apple to progress Objective-C and its development toolset. However, in 2010 the LLVM reached a point where it could support more features than could be added to Objective-C. This is when development of Swift started.

LLVM Advances & Obj-C

While Apple was adopting LLVM/Clang and using it to move Obj-C forward, we saw significant improvements.

LLVM Advances & Swift

A primary motivation for Swift was to piggy-back on and continue to utilize the advancements of the LLVM toolchain. It’s notable that there were also other great reasons to introduce a new, more accessible language. New developer ramp-up, for one: Obj-C is not the easiest first programming language to master.



With Swift, the compilation team decided to introduce a new intermediate language, the Swift Intermediate Language (SIL), to enable both language features and improved compilation. The flexibility of LLVM allowed them to slot it right into the existing compilation flow.

A huge source of info on this was a talk given at the 2015 LLVM Developers’ Meeting - Swift’s High-Level IR: A Case Study…. Check this out. It’s fascinating.


As stated in the talk linked above, SIL enables a wider gap between source (Swift) semantics & IR semantics during compilation. This has a few notable benefits:

  • Language evolution - SIL allows Swift language writers to implement more of the language in the language itself
  • Safety - SIL allows for compiler errors on things like uninitialized variables and unreachable code
  • Generics - a really interesting example of compilation strategy impacting language features
    • SIL allows for a generics model that supports dynamic dispatch & separate compilation. This is instead of depending on template instantiation, as in C++, where the compiler must re-instantiate generic code for every concrete type it’s used with.
    • In simpler terms, the optimizer can in-line a concrete type’s function definitions at compile time, leaving the runtime impact of generics basically non-existent. Check out [this page](https://swift.org/blog/whole-module-optimizations/) on how it’s done.
    • FYI, this optimization is called function specialization.
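To make this concrete, here’s a small illustrative sketch (my own example, not from the talk): a generic function that compiles once against the Comparable protocol, and which the optimizer may also specialize per concrete type.

```swift
// A generic function, compiled once against the Comparable
// protocol's requirements rather than per concrete type.
func maxElement<T: Comparable>(_ items: [T]) -> T? {
    var result: T?
    for item in items {
        if result == nil || item > result! {
            result = item
        }
    }
    return result
}

// With whole-module optimization, the compiler may emit
// specialized copies (one for Int, one for String) and
// in-line them, so the generic machinery costs nothing here.
print(maxElement([3, 1, 4]) ?? 0)        // 4
print(maxElement(["a", "c", "b"]) ?? "") // c
```

Calling through the generic signature still works with dynamic dispatch when specialization isn’t possible, which is what keeps separate compilation viable.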

Summing it Up

It’s super relevant, as an engineer following the Swift language, to understand how its design was so hugely shaped by advancements made to the LLVM during the period Apple was championing this toolchain. It not only gives great context behind the language features we’ve seen rolled out so far.. but provides an indication of what may lie ahead for this still young language!

Understanding The LLVM - Introduction

As software engineers, it’s common to shy away from understanding compilation. At a high level, you know the compiler is turning your code into machine code the target computer (running your program) can execute. However, it’s a powerful thing to understand compilation at a deeper level.

Specifically as a Swift developer, it’s fascinating to know that without the earlier development of the LLVM, the toolchain Swift’s compiler is built on.. the language itself likely wouldn’t have been developed.

This article will look at what the LLVM is and the history of its development and adoption by Apple.

What is the LLVM

LLVM is an umbrella project for many subprojects, which together form a compiler infrastructure and tool chain used today largely by developers using C/C++ based languages, and which is heavily integrated into Xcode and its compilation process.

Of note, and this will be explored later on, the LLVM was developed as an alternative to the most widely used compilation toolchain of the time, GCC (the GNU Compiler Collection). The two have been compared heavily over the years of LLVM’s growth and development, and both remain viable options for certain languages in certain scenarios. We will talk here about what sets the LLVM apart and why it’s now the dedicated compiler toolchain strategy for Apple and, subsequently, iOS development.

A Compilation Tool Chain

Any set of compilation tools (such as the LLVM and its sub-projects) follows a similar flow: compiling source code to machine code and then handing the result off to a process for linking and generating an executable.

The LLVM Pieces


A compiler’s frontend converts source code to an intermediate representation (IR) that can then be handed on to the next stages of compilation… or used by an IDE for warnings/errors or other types of feedback.

  • For Obj-C, Clang was developed to be LLVM’s go-to frontend compiler.. allowing the language to be progressed beyond what was possible with GCC.
  • For Swift, a custom front-end compiler was developed.


Optimizations are transformations the compiler applies at compile time to speed up execution or in general increase performance in some way: reducing footprint, inlining code, etc.

For the LLVM, its optimizations have been something that has set it apart. Now, in many use cases, it surpasses GCC in speed and other benchmarks.

Development feedback tools

LLDB is a great example here. This is a native debugger that is fast and much more memory efficient than its GNU toolchain counterpart, GDB.

These types of tools exist outside the compilation flow.. but often build on the same pieces. LLDB, for example, uses the source code analysis in Clang.


After the front-end has converted the source language to an Intermediate Representation (IR), and this has gone through optimization, a compiler’s backend generates the code that will actually be executable by the target machine’s architecture and CPU.

The LLVM’s capability here is likely a strong reason for Apple’s support and adoption. It uses target-independent code generation capable of creating output for several types of target CPUs — including x86, PowerPC, ARM, and SPARC. Useful for a company building software that will run on so many different hardware devices.

Linking tools

We won’t go into these too much for LLVM. Just know that linking is one of the last stages of ‘building’. It happens post-compilation and will usually raise errors if you’ve got duplicate definitions across multiple source code files.

Where the Power is

Most compilation toolchains, including GCC, break things into a front-end, middle section and back-end. This brings great flexibility. LLVM went further in terms of modularity and reusability.

  • If you want support for a new language, you write a front-end to convert your source language to LLVM IR code.
  • If you need more speed given a certain source code size, you could incorporate a new optimizer.
  • If you need your code to run on a specific target architecture that isn’t supported, you could write your own backend.
  • Many of the LLVM projects can also be plugged into GCC.

Apple & LLVM History

The LLVM compiler project was not started at Apple but at the University of Illinois by Chris Lattner (that guy) and a professor there, Vikram Adve.

It was originally implemented to compile C & C++, but was created with a ‘language-agnostic’ design in mind… this caught the eye of Apple, who brought Lattner, his project and its development in-house in 2005. Though it appears it was not immediately invested in; Lattner spent his own time advancing the project until he was able to demonstrate its value and convince Apple to invest a team in it. It was further advanced and over time became integral to Apple’s development toolset… slowly replacing the previously used GCC compiler and many of the low-level tools Apple used across its development.

The benefits brought by the LLVM allowed Apple to progress Objective-C, Xcode and much of the performance and potential of their low-level tools.

In 2010, it seems the LLVM reached a point where it could support more features than could be added to Objective C. Lattner apparently began working on Swift at this point. The framework laid by the advancements to the LLVM, Obj-C and Apple toolset seem to have been foundational in the direction Swift would go.

“We simplified memory management with Automatic Reference Counting (ARC). Our framework stack, built on the solid base of Foundation and Cocoa, has been modernized and standardized throughout. Objective-C itself has evolved to support blocks, collection literals, and modules, enabling framework adoption of modern language technologies without disruption. Thanks to this groundwork, we can now introduce a new language for the future of Apple software development.” – Chris Lattner

Timeline of LLVM history at Apple


This article should have given a good idea of the LLVM specifically around:

  • What it is from a high level
  • What its most fundamental modular projects are
  • Why, how and when Apple adopted it
  • Where its power lies

One interesting next step in learning about the LLVM and its use would be to look at how the Swift front-end compiler was developed and how it fits into the LLVM toolchain. Fun!

I’ve been building iPhone apps for almost 6 years. In the last few, I’ve primarily focused on engineering apps that interface with custom-built hardware devices. You can think of a smart watch or maybe a smart thermostat as examples of smart devices.

I want to talk about what changes when this is your paradigm. I will argue that you not only need to worry about the standard software development concerns:

  • A delightful user experience
  • Logical architecture
  • Solid/maintainable codebases

You also need to consider how your off-device software creates visibility into what’s happening during not only development, but field testing and customer use. This creates a whole new dimension to non-functional requirements.

Also, let’s be clear.. the times we most need visibility are when things are failing.


Let’s start with considering a real world example.

The Grow planter.

I currently work at Grow, and much of my understanding of visibility and hardware comes from my experience working on their planter and the software that supports it. Grow’s purpose is to bring the joy of gardening to people who don’t have gardening experience, or perhaps the time to always be checking in on a garden. Our smart planter has integrated sensors that not only drive an automated watering system.. but allow us to provide customized feedback about what’s happening with the user’s crops.

What does this mean from a technical stand-point?

1. We need to know what plants the user has planted.
2. The planter's sensors are constantly taking readings. We need to get these off the device and onto our servers for analysis.
3. We need to be able to instruct the planter about custom watering based on the sensor records and individual plant needs.
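To make the shape of this pipeline concrete, here’s a minimal sketch (the type and field names are hypothetical, not Grow’s actual API) of a sensor record as it might travel from planter to phone to server:

```swift
import Foundation

// Hypothetical sensor record, as it might travel from the
// planter over Bluetooth, through the phone, to the server.
struct SensorReading: Codable {
    let plantID: String   // which plant this reading belongs to
    let moisture: Double  // soil moisture, 0.0 to 1.0
    let timestamp: Date   // when the planter took the reading
}

// The phone would batch these up and forward them for analysis.
let reading = SensorReading(plantID: "basil-01",
                            moisture: 0.42,
                            timestamp: Date())
let payload = try? JSONEncoder().encode(reading)
```

The server-side analysis and the watering instructions flowing back down would use similar small, serializable records.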

What about Visibility?

Let’s circle back to the argument that building software for hardware devices means you have to think more about different kinds of visibility.

All components under development

While engineering a hardware product… all aspects are likely being engineered at the same time. This includes:

Hardware - The boards, circuitry

Mechanical - the physical system the hardware is sitting in.. sometimes just as protection.. but other times integrating other physical components. As with Grow, our physical system holds our hardware.. but is also designed to hold soil and plants and hook up to a water line.

Firmware - The software that runs on the hardware board.. handling communication between the components the mechanical design connects to it.

  • Usually written in C…
  • When building a Bluetooth-driven system, the firmware is receiving commands from the app, and either responding with the information requested or initiating a received instruction.

Returning to the idea that visibility is required most when failures occur, given that a smart device being engineered has all of these components changing at any given time… we have a great many new points of failure.

Our Stack

Every stack is different. However, the majority of smart devices connect to users and the outside world through a single gateway. Many (like Grow) use Bluetooth and rely on the user’s smart phone to be this interface. At Grow, our mobile apps act as our single communication gateway, retrieving sensor records from the device and sending any instructions for operation. The phone acts as a conduit, with all data ending up at and all instructions ultimately coming from our cloud servers.

The Grow stack.

How does Grow’s software stack provide visibility?

We build visibility into systems largely to help out when things go wrong. My argument is that when you engineer a software stack that supports a custom hardware device, you have more failure points to consider. Essentially, as many potential failure points as possible on the hardware device itself need to be communicated off the device and tracked by your software.

At Grow, we tackle this in a few ways.

Robust logging

It’s important that logging can be used during lab and field testing and exported off the mobile device during user testing.
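As an illustrative sketch (the names here are hypothetical, not Grow’s actual code), such a logger might simply collect timestamped entries and expose them for export off the device:

```swift
import Foundation

// Hypothetical minimal logger: collects timestamped entries in
// memory and exports them as a single string for retrieval.
final class DeviceLogger {
    private var entries: [String] = []

    func log(_ message: String) {
        entries.append("\(Date()) \(message)")
    }

    // Export everything, e.g. to attach to a bug report or
    // pull off the phone during field testing.
    func export() -> String {
        return entries.joined(separator: "\n")
    }
}

let logger = DeviceLogger()
logger.log("BLE connect requested")
logger.log("Sensor sync complete: 12 records")
print(logger.export())
```

A production version would persist to disk and rotate files, but the key property is the same: the log survives the session and can leave the device.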

Granular controls

Often, events that a user may trigger through the consumer-facing interface are not granular enough. You may need to build controls that are used specifically for lab & field testing.

Swift is widely known and loved by programmers for different things. One of these is its collections and the Sequence protocol that comes with them. Sequence is not just for Swift standard library collections, however. As a protocol, it can be applied to really any struct or class that wants to behave like a collection in Swift and implements the Sequence requirements.

The most commonly used thing Sequence brings us is the ability to use for-in loops.

This allows us to write things like:

let numbers = 1...3
for number in numbers {
    print(number)
}
// Prints "1"
// Prints "2"
// Prints "3"

Without Sequence, we’d end up writing something like this:

let numbers = [1, 2, 3]

let count = numbers.count
var i = 0
while i < count {
    print(numbers[i])
    i += 1
}

So Sequence brings us for-in loops. But it turns out these are really at the root of everything we get with the Sequence protocol. Here are some of its more frequently used functions:

  • func dropFirst(_:)
  • func dropLast(_:)
  • func filter(_:)
  • func forEach(_:)
  • func map(_:)
  • func split(_:omittingEmptySubsequences:whereSeparator:)
  • func contains(where:)
  • func first(where:)
  • func reduce(_:_:)
  • func reversed()
  • func sorted(by:)

Looking at these.. you could implement all of them using a for-in loop. Consider a possible implementation of contains(_:).

// Sketched as a method on a collection whose Element is
// Equatable; `elements` is the collection's storage.
func contains(_ e: Element) -> Bool {
    var elementFound = false
    for item in elements {
        if e == item {
            elementFound = true
        }
    }
    return elementFound
}

Pretty uncomplicated stuff, right? So, if you wanted to create your own custom collection and get the power of Sequence and all these useful methods, how do you do it? It’s fundamentally based in allowing the iteration of a for-in loop.

Let’s look at doing this for a pretty basic data structure, the linked list. So, a linked list is a set of instances that each have a reference to the next element in the set.

You know when you've reached the end because the reference to next is nil.

Fundamentally, to implement a linked list, you’re going to need a representation of a link and something to represent the list. If we want to add Sequence type functionality… we’ll create both those classes as we would typically, but the list itself needs to implement both the Sequence and IteratorProtocol protocols. All we need to implement here is the next() function.

class Link<T> {
    let value: T
    let nextLink: Link<T>?
    init(_ value: T, next: Link<T>?) {
        self.value = value
        self.nextLink = next
    }
}

class LinkedLink<T>: Sequence, IteratorProtocol {
    var currentNode: Link<T>?
    init(head: Link<T>) {
        currentNode = head
    }
    func next() -> T? {
        if let next = currentNode?.nextLink {
            currentNode = next
            return next.value
        }
        return nil
    }
}

let elementFour = Link(4, next: nil)
let elementThree = Link(3, next: elementFour)
let elementTwo = Link(2, next: elementThree)
let elementOne = Link(1, next: elementTwo)

let linkedList = LinkedLink(head: elementOne)

for item in linkedList {
    print(item)
}

// prints 2
// prints 3
// prints 4

// Since the list is its own (single-pass) iterator, we use a
// fresh instance to query it again.
LinkedLink(head: elementOne).contains(2)  // true

Notably, with this linked list example.. we don’t get the first element, or the head, printed. This is down to the fact that our implementation of next(), when called for the first time, returns the second link’s value.. which we then print. This could be solved by introducing a subclass of Link to represent the head and then instantiating the list with that.
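As a sketch of one alternative fix (my own illustration, not the subclass approach from the post): have next() yield the current node’s value before advancing, so the head is included. The Link class is repeated here so the snippet stands alone:

```swift
class Link<T> {
    let value: T
    let nextLink: Link<T>?
    init(_ value: T, next: Link<T>?) {
        self.value = value
        self.nextLink = next
    }
}

// A variant where next() returns the current node's value first,
// then advances. Iteration now starts at the head.
class HeadInclusiveList<T>: Sequence, IteratorProtocol {
    var currentNode: Link<T>?
    init(head: Link<T>) { currentNode = head }

    func next() -> T? {
        guard let node = currentNode else { return nil }
        currentNode = node.nextLink
        return node.value
    }
}

let head = Link(1, next: Link(2, next: Link(3, next: nil)))
var printed: [Int] = []
for item in HeadInclusiveList(head: head) {
    printed.append(item)
}
print(printed) // [1, 2, 3]
```

The only change is ordering: capture the value, then move the cursor, rather than moving first.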

Summing it Up

So, the Sequence protocol is pretty simple and easy to add to a custom collection object, and it brings a suite of great functions for your use. What you’re essentially setting up when you implement the Sequence and IteratorProtocol protocols is the instruction for enumeration.. which is the functionality at the core of everything Sequence provides. Well done, Swift writers!

This is a continuation in the series around functional programming concepts.

Today: Currying

As much as it sounds like a trip to London’s Brick Lane (curry capital of the western world?)… this is not about vindaloo!

Currying is essentially when you break down a function taking multiple arguments into a series of functions, each taking one or more of the arguments.

Consider the following function for adding two values.

func add(a: Int, b: Int) -> Int {
    return a + b
}

add(a: 3, b: 4)  // returns 7

You would turn this into a currying function as follows.

func add(a: Int) -> (Int) -> Int {
    return { (b: Int) -> Int in
        return a + b
    }
}

let v = add(a: 3)(6)  //returns 9

var add3 = add(a: 3)

add3(4) //returns 7

The value is in being able to provide the first value.. store the function holding it and its subsequent behavior.. and then call it later with the remaining values.
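A small illustration (my own example) of why that matters: a stored, partially applied function composes nicely with higher-order functions like map. The curried add is repeated here so the snippet stands alone:

```swift
// Curried add: takes the first argument now, returns a function
// that takes the second argument later.
func add(a: Int) -> (Int) -> Int {
    return { b in a + b }
}

// Partially apply once, reuse many times.
let addTen = add(a: 10)
let bumped = [1, 2, 3].map(addTen)
print(bumped) // [11, 12, 13]
```

Because `addTen` is just a value of type `(Int) -> Int`, it can be stored, passed around, and applied wherever a one-argument function is expected.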

There is some potential danger to readability, and possible confusion about which values have already been captured. But there it is.