This is an introductory explanation of the Swift 3 programming language as it relates to practical real-life iOS programming, from my book, iOS 10 Programming Fundamentals with Swift. Copyright 2016 Matt Neuburg. Please note that this edition is outdated; the current books are iOS 15 Programming Fundamentals with Swift and Programming iOS 14. If my work has been of help to you, please consider purchasing one or both of them, or you can reward me through PayPal at http://www.paypal.me/mattneub. Thank you!
In the preceding chapter, I discussed some built-in object types. But I have not yet explained object types themselves. As I mentioned in Chapter 1, Swift object types come in three flavors: enum, struct, and class. What are the differences between them? And how would you create your own object type? That’s what this chapter is about.
I’ll describe object types in general, and then each of the three flavors. Then I’ll explain three Swift ways of giving an object type greater flexibility: protocols, generics, and extensions. Finally, the survey of Swift’s built-in types will conclude with three umbrella types and three collection types.
Object types are declared with the flavor of the object type (enum, struct, or class), the name of the object type (which should start with a capital letter), and curly braces:

class Manny { }
struct Moe { }
enum Jack { }
The visibility (scope), and hence the usability, of an object type by other code depends upon where its declaration appears.
Declarations for any object type may contain within their curly braces the following things:
A variable declared at the top level of an object type declaration is a property. By default, it is an instance property. An instance property is scoped to an instance: it is accessed through a particular instance of this type, and its value can be different for every instance of this type.
Alternatively, a property can be a static/class property. For an enum or struct, it is declared with the keyword static; for a class, it may instead be declared with the keyword class. Such a property belongs to the object type itself: it is accessed through the type, and it has just one value, associated with the type.

A function declared at the top level of an object type declaration is a method. By default, it is an instance method: it is called by sending a message to a particular instance of this type. Inside an instance method, self is the instance.

Alternatively, a method can be a static/class method. For an enum or struct, it is declared with the keyword static; for a class, it may be declared instead with the keyword class. It is called by sending a message to the type. Inside a static/class method, self is the type.
An initializer is a function called in order to bring an instance of an object type into existence. Strictly speaking, it is a static/class method, because it is called by talking to the object type. It is usually called using special syntax: the name of the type is followed directly by parentheses, as if the type itself were a function. When an initializer is called, a new instance is created and returned as a result. You will usually do something with the returned instance, such as assigning it to a variable, in order to preserve it and work with it in subsequent code.
For example, suppose we have a Dog class:
class Dog { }
Then we can make a Dog instance like this:
Dog()
That code, however, though legal, is silly — so silly that it warrants a warning from the compiler. We have created a Dog instance, but there is no reference to that instance. Without such a reference, the Dog instance comes into existence and then immediately vanishes in a puff of smoke. The usual sort of thing is more like this:
let fido = Dog()
Now our Dog instance will persist as long as the variable fido persists (see Chapter 3) — and the variable fido gives us a reference to our Dog instance, so that we can use it.

Observe that Dog() calls an initializer even though our Dog class doesn’t declare any initializers! The reason is that object types may have implicit initializers. These are a convenience that saves you the trouble of writing your own initializers. But you can write your own initializers, and you will often do so.

An initializer is a kind of function, and its declaration syntax is rather like that of a function. To declare an initializer, you use the keyword init followed by a parameter list, followed by curly braces containing the code. An object type can have multiple initializers, distinguished by their parameters. A frequent use of the parameters is to set the values of instance properties.
For example, here’s a Dog class with two instance properties, name (a String) and license (an Int). We give these instance properties default values that are effectively placeholders — an empty string and the number zero. Then we declare three initializers, so that the caller can create a Dog instance in three different ways: by supplying a name, by supplying a license number, or by supplying both. In each initializer, the parameters supplied are used to set the values of the corresponding properties:

class Dog {
    var name = ""
    var license = 0
    init(name:String) {
        self.name = name
    }
    init(license:Int) {
        self.license = license
    }
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}
Observe that in that code, in each initializer, I’ve given each parameter the same name as the property to which it corresponds. There’s no reason to do that apart from stylistic clarity. In the code for each initializer, I can distinguish the parameter from the property by using self to access the property.
The result of that declaration is that I can create a Dog in three different ways:
let fido = Dog(name:"Fido")
let rover = Dog(license:1234)
let spot = Dog(name:"Spot", license:1357)
What I can’t do is to create a Dog with no initializer parameters. I wrote initializers, so my implicit initializer went away. This code is no longer legal:
let puff = Dog() // compile error
Of course, I could make that code legal by explicitly declaring an initializer with no parameters:
class Dog {
    var name = ""
    var license = 0
    init() { }
    init(name:String) {
        self.name = name
    }
    init(license:Int) {
        self.license = license
    }
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}
Now, the truth is that we don’t need those four initializers, because an initializer is a function, and a function’s parameters can have default values. Thus, I can condense all that code into a single initializer, like this:
class Dog {
    var name = ""
    var license = 0
    init(name:String = "", license:Int = 0) {
        self.name = name
        self.license = license
    }
}
I can still make an actual Dog instance in four different ways:
let fido = Dog(name:"Fido")
let rover = Dog(license:1234)
let spot = Dog(name:"Spot", license:1357)
let puff = Dog()
Now comes the really interesting part. In my property declarations, I can eliminate the assignment of default initial values (as long as I declare explicitly the type of each property):
class Dog {
    var name : String // no default value!
    var license : Int // no default value!
    init(name:String = "", license:Int = 0) {
        self.name = name
        self.license = license
    }
}
That code is legal (and common) — because an initializer initializes! In other words, I don’t have to give my properties initial values in their declarations, provided I give them initial values in all initializers. That way, I am guaranteed that all my instance properties have values when the instance comes into existence, which is what matters. Conversely, an instance property without an initial value when the instance comes into existence is illegal. A property must be initialized either as part of its declaration or by every initializer, and the compiler will stop you otherwise.
The Swift compiler’s insistence that all instance properties be properly initialized is a valuable feature of Swift. (Contrast Objective-C, where instance properties can go uninitialized — and often do, leading to mysterious errors later.) Don’t fight the compiler; work with it. The compiler will help you by giving you an error message (“Return from initializer without initializing all stored properties”) until all your initializers initialize all your instance properties:
class Dog {
    var name : String
    var license : Int
    init(name:String = "") {
        self.name = name // compile error
    }
}
Because setting an instance property in an initializer counts as initialization, it is legal even if the instance property is a constant declared with let:

class Dog {
    let name : String
    let license : Int
    init(name:String = "", license:Int = 0) {
        self.name = name
        self.license = license
    }
}
In our artificial examples, we have been very generous with our initializer: we are letting the caller instantiate a Dog without supplying a name: argument or a license: argument. Usually, however, the purpose of an initializer is just the opposite: we want to force the caller to supply all needed information at instantiation time. Thus, in real life, it is much more likely that our Dog class would look like this:

class Dog {
    let name : String
    let license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}
In that code, our Dog has a name property and a license property, and values for these must be supplied at instantiation time (there are no default values), and those values can never be changed thereafter (these properties are constants). In this way, we enforce a rule that every Dog must have a meaningful name and license. There is now only one way to make a Dog:
let spot = Dog(name:"Spot", license:1357)
Sometimes, there is no meaningful default value that can be assigned to an instance property during initialization. For example, perhaps the initial value of this property will not be obtained until some time has elapsed after this instance has come into existence. This situation conflicts with the requirement that all instance properties be initialized either in their declaration or through an initializer. You could, of course, just circumvent the problem by assigning a default initial value anyway; but this fails to communicate to your own code the fact that this isn’t a “real” value.
A sensible and common solution, as I explained in Chapter 3, is to declare your instance property as a var having an Optional type. An Optional has a value, namely nil, signifying that no “real” value has been supplied; and an Optional var is initialized to nil automatically. Thus, your code can test this instance property against nil and, if it is nil, it won’t use the property. Later, the property will be given its “real” value. Of course, that value is now wrapped in an Optional; but if you declare this property as an implicitly unwrapped Optional, you can use the wrapped value directly, without explicitly unwrapping it — as if this weren’t an Optional at all — once you’re sure it is safe to do so:

// this property will be set automatically when the nib loads
@IBOutlet var myButton: UIButton!
// this property will be set after time-consuming gathering of data
var albums : [MPMediaItemCollection]!
Except in order to set an instance property, an initializer may not refer to self, explicitly or implicitly, until all instance properties have been initialized. This rule guarantees that the instance is fully formed before it is used. This code, for example, is illegal:

struct Cat {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        meow() // too soon - compile error
        self.license = license
    }
    func meow() {
        print("meow")
    }
}
The call to the instance method meow is implicitly a reference to self — it means self.meow(). The initializer can say that, but not until it has fulfilled its primary contract of initializing all uninitialized properties. The call to the instance method meow simply needs to be moved down one line, so that it comes after both name and license have been initialized.
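Here, as a minimal sketch of my own, is what that corrected Cat initializer might look like:

struct Cat {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
        meow() // fine now - all instance properties have been initialized
    }
    func meow() {
        print("meow")
    }
}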
Initializers within an object type can call one another by using the syntax self.init(...). An initializer that calls another initializer is called a delegating initializer. When an initializer delegates, the other initializer — the one that it delegates to — must completely initialize the instance first, and then the delegating initializer can work with the fully initialized instance, possibly setting again a var property that was already set by the initializer that it delegated to.

A delegating initializer appears to be an exception to the rule against saying self too early. But it isn’t, because it is saying self in order to delegate — and delegating will cause all instance properties to be initialized. In fact, the rules about a delegating initializer saying self are even more stringent: a delegating initializer cannot refer to self, not even to set a property, until after the call to the other initializer. For example:

struct Digit {
    var number : Int
    var meaningOfLife : Bool
    init(number:Int) {
        self.number = number
        self.meaningOfLife = false
    }
    init() { // this is a delegating initializer
        self.init(number:42)
        self.meaningOfLife = true
    }
}
Moreover, a delegating initializer cannot set an immutable property (a let variable) at all. That is because it cannot refer to the property until after it has called the other initializer, and at that point the instance is fully formed — initialization proper is over, and the door for initialization of immutable properties has closed. Thus, the preceding code would be illegal if meaningOfLife were declared with let, because the second initializer is a delegating initializer and cannot set an immutable property.
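To make that concrete, here's a sketch of my own of that illegal variant:

struct Digit {
    let number : Int
    let meaningOfLife : Bool
    init(number:Int) { // not a delegating initializer, so it may set the constants
        self.number = number
        self.meaningOfLife = false
    }
    init() { // delegating initializer
        self.init(number:42)
        self.meaningOfLife = true // compile error: can't set an immutable property here
    }
}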
Be careful not to delegate recursively! If you tell an initializer to delegate to itself, or if you create a vicious circle of delegating initializers, the compiler won’t stop you (I regard that as a bug), but your running app will hang. For example, don’t say this:
struct Digit { // do not do this!
    var number : Int = 100
    init(value:Int) {
        self.init(number:value)
    }
    init(number:Int) {
        self.init(value:number)
    }
}
An initializer can return an Optional wrapping the new instance. In this way, nil can be returned to signal failure. An initializer that behaves this way is a failable initializer. To mark an initializer as failable when declaring it, put a question mark after the keyword init. If your failable initializer needs to return nil, explicitly write return nil. It is up to the caller to test the resulting Optional for equivalence with nil, unwrap it, and so forth, as with any Optional.
Here’s a version of Dog with an initializer that returns an Optional, returning nil if the name: or license: arguments are invalid:

class Dog {
    let name : String
    let license : Int
    init?(name:String, license:Int) {
        if name.isEmpty {
            return nil
        }
        if license <= 0 {
            return nil
        }
        self.name = name
        self.license = license
    }
}
The resulting value is typed as an Optional wrapping a Dog, and the caller will need to unwrap that Optional (if it isn’t nil) before sending any messages to it.
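For example, a caller might use conditional binding, as in this minimal sketch of my own:

if let fido = Dog(name:"Fido", license:1234) {
    print(fido.name) // Fido
} else {
    print("initialization failed")
}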
Cocoa and Objective-C conventionally return nil from initializers to signal failure; the API for such initializers has been hand-tweaked as a Swift failable initializer if initialization really might fail. For example, the UIImage initializer init?(named:) is a failable initializer, because there might be no image with the given name. It is not implicitly unwrapped, so the resulting value is a UIImage?, and will typically have to be unwrapped before using it. (Most Objective-C initializers, however, are not bridged as failable initializers, even though in theory any Objective-C initializer might return nil.)
A property is a variable — one that happens to be declared at the top level of an object type declaration. This means that everything said about variables in Chapter 3 applies. A property has a fixed type; it can be declared with var or let; it can be stored or computed; it can have setter observers. An instance property can also be declared lazy.
A stored instance property must be given an initial value. But, as I explained a moment ago, this doesn’t have to be through assignment in the declaration; it can be through an initializer instead. Setter observers are not called during initialization of properties.
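For instance, here's a minimal sketch of my own demonstrating that a setter observer stays silent while an initializer sets the property:

class Dog {
    var name : String {
        didSet {
            print("name was changed")
        }
    }
    init(name:String) {
        self.name = name // observer is not called here
    }
}
let d = Dog(name:"Fido") // nothing in console
d.name = "Rover" // "name was changed"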
Code that initializes a property cannot fetch an instance property or call an instance method. Such behavior would require a reference, explicit or implicit, to self; and during initialization, there is no self yet — self is exactly what we are in the process of initializing. Making this mistake can result in some of Swift’s most perplexing compile error messages. For example, this is illegal (and removing the explicit references to self doesn’t make it legal):

class Moi {
    let first = "Matt"
    let last = "Neuburg"
    let whole = self.first + " " + self.last // compile error
}
One solution in that situation would be to make whole a computed property:

class Moi {
    let first = "Matt"
    let last = "Neuburg"
    var whole : String {
        return self.first + " " + self.last
    }
}
That’s legal because the computation won’t actually be performed until after self exists. Another solution is to declare whole as lazy:

class Moi {
    let first = "Matt"
    let last = "Neuburg"
    lazy var whole : String = self.first + " " + self.last
}
Again, that’s legal because the reference to self won’t be performed until after self exists. Similarly, a property initializer can’t call an instance method, but a computed property can, and so can a lazy property.
As I demonstrated in Chapter 3, a variable’s initializer can consist of multiple lines of code if you write it as a define-and-call anonymous function. If this variable is an instance property, and if that code is to refer to other instance properties or instance methods, the variable must be declared lazy:

class Moi {
    let first = "Matt"
    let last = "Neuburg"
    lazy var whole : String = {
        var s = self.first
        s.append(" ")
        s.append(self.last)
        return s
    }()
}
If a property is an instance property (the default), it can be accessed only through an instance, and its value is separate for each instance. For example, let’s start once again with a Dog class:
class Dog {
    let name : String
    let license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}
Our Dog class has a name instance property. Then we can make two different Dog instances with two different name values, and we can access each Dog instance’s name through the instance:

let fido = Dog(name:"Fido", license:1234)
let spot = Dog(name:"Spot", license:1357)
let aName = fido.name // "Fido"
let anotherName = spot.name // "Spot"
A static/class property, on the other hand, is accessed through the type, and is scoped to the type, which usually means that it is global and unique. I’ll use a struct as an example:
struct Greeting {
    static let friendly = "hello there"
    static let hostile = "go away"
}
Now code elsewhere can fetch the values of Greeting.friendly and Greeting.hostile. That example is neither artificial nor trivial; immutable static properties are a convenient and effective way to supply your code with nicely namespaced constants.
Unlike instance properties, static properties can be initialized with reference to one another; the reason is that static property initializers are lazy (see Chapter 3):
struct Greeting {
    static let friendly = "hello there"
    static let hostile = "go away"
    static let ambivalent = friendly + " but " + hostile
}
Notice the lack of self in that code. In static/class code, self means the type itself. I like to use self explicitly wherever it would be implicit, but here I can’t use it without arousing the ire of the compiler (I regard this as a bug). To clarify the status of the terms friendly and hostile, I can use the name of the type, just as any other code would do:

struct Greeting {
    static let friendly = "hello there"
    static let hostile = "go away"
    static let ambivalent = Greeting.friendly + " but " + Greeting.hostile
}
On the other hand, if I write ambivalent as a computed property, I can use self:

struct Greeting {
    static let friendly = "hello there"
    static let hostile = "go away"
    static var ambivalent : String {
        return self.friendly + " but " + self.hostile
    }
}
On the other other hand, I’m not allowed to use self when the initial value is set by a define-and-call anonymous function (again, I regard this as a bug):

struct Greeting {
    static let friendly = "hello there"
    static let hostile = "go away"
    static var ambivalent : String = {
        return self.friendly + " but " + self.hostile // compile error
    }()
}
A method is a function — one that happens to be declared at the top level of an object type declaration. This means that everything said about functions in Chapter 2 applies.
By default, a method is an instance method. This means that it can be accessed only through an instance. Within the body of an instance method, self is the instance. To illustrate, let’s continue to develop our Dog class:

class Dog {
    let name : String
    let license : Int
    let whatDogsSay = "Woof"
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    func bark() {
        print(self.whatDogsSay)
    }
    func speak() {
        self.bark()
        print("I'm \(self.name)")
    }
}
Now I can make a Dog instance and tell it to speak:
let fido = Dog(name:"Fido", license:1234)
fido.speak() // Woof I'm Fido
In my Dog class, the speak method calls the instance method bark by way of self, and obtains the value of the instance property name by way of self; and the bark instance method obtains the value of the instance property whatDogsSay by way of self. This is because instance code can use self to refer to this instance. Such code can omit self if the reference is unambiguous; thus, for example, I could have written this:

func speak() {
    bark()
    print("I'm \(name)")
}
But I never write code like that (except by accident). Omitting self, in my view, makes the code harder to read and maintain; the loose terms bark and name seem mysterious and confusing. Moreover, sometimes self cannot be omitted. For example, in my implementation of init(name:license:), I must use self to disambiguate between the parameter name and the property self.name.
A static/class method is accessed through the type, and self means the type. I’ll use our Greeting struct as an example:

struct Greeting {
    static let friendly = "hello there"
    static let hostile = "go away"
    static var ambivalent : String {
        return self.friendly + " but " + self.hostile
    }
    static func beFriendly() {
        print(self.friendly)
    }
}

And here’s how to call the static beFriendly method:
Greeting.beFriendly() // hello there
There is a kind of conceptual wall between static/class members, on the one hand, and instance members on the other; even though they may be declared within the same object type declaration, they inhabit different worlds. A static/class method can’t refer to “the instance” because there is no instance; thus, a static/class method cannot directly refer to any instance properties or call any instance methods. An instance method, on the other hand, can refer to the type by name, and can thus access static/class properties and can call static/class methods.
For example, let’s return to our Dog class and grapple with the question of what dogs say. Presume that all dogs say the same thing. We’d prefer, therefore, to express whatDogsSay not at instance level but at class level. This would be a good use of a static property. Here’s a simplified Dog class that illustrates:

class Dog {
    static var whatDogsSay = "Woof"
    func bark() {
        print(Dog.whatDogsSay)
    }
}
Now we can make a Dog instance and tell it to bark:
let fido = Dog()
fido.bark() // Woof
(I’ll talk later in this chapter about another way in which an instance method can refer to the type.)
A subscript is an instance method that is called in a special way — by appending square brackets to an instance reference. The square brackets can contain arguments to be passed to the subscript method. You can use this feature for whatever you like, but it is suitable particularly for situations where this is an object type with elements that can be appropriately accessed by key or by index number. I have already described (in Chapter 3) the use of this syntax with strings, and it is familiar also from dictionaries and arrays; you can use square brackets with strings and dictionaries and arrays exactly because Swift’s String and Dictionary and Array types declare subscript methods.
The syntax for declaring a subscript method is somewhat like a function declaration and somewhat like a computed property declaration. That’s no coincidence! A subscript is like a function in that it can take parameters: arguments can appear in the square brackets when a subscript method is called. A subscript is like a computed property in that the call is used like a reference to a property: you can fetch its value or you can assign into it.
To illustrate, I’ll write a struct that treats an integer as if it were a digit sequence, returning a digit that can be specified by an index number in square brackets; for simplicity, I’m deliberately omitting any sort of error-checking:
struct Digit {
    var number : Int
    init(_ n:Int) {
        self.number = n
    }
    subscript(ix:Int) -> Int {
        get {
            let s = String(self.number)
            return Int(String(s[s.index(s.startIndex, offsetBy:ix)]))!
        }
    }
}
Here’s an example of calling the getter; the instance with appended square brackets containing the arguments is used just as if you were getting a property value:
var d = Digit(1234)
let aDigit = d[1] // 2
Now I’ll expand my Digit struct so that its subscript method includes a setter (and again I’ll omit error-checking):
struct Digit {
    var number : Int
    init(_ n:Int) {
        self.number = n
    }
    subscript(ix:Int) -> Int {
        get {
            let s = String(self.number)
            return Int(String(s[s.index(s.startIndex, offsetBy:ix)]))!
        }
        set {
            var s = String(self.number)
            let i = s.index(s.startIndex, offsetBy:ix)
            s.replaceSubrange(i...i, with: String(newValue))
            self.number = Int(s)!
        }
    }
}
And here’s an example of calling the setter; the instance with appended square brackets containing the arguments is used just as if you were setting a property value:
var d = Digit(1234)
d[0] = 2 // now d.number is 2234
An object type can declare multiple subscript methods, distinguished by their parameters.
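For instance (a sketch of my own, not from the original), the Digit struct could add a second read-only subscript, distinguished by its parameters, that returns a run of digits:

struct Digit {
    var number : Int
    init(_ n:Int) {
        self.number = n
    }
    subscript(ix:Int) -> Int { // a single digit
        let s = String(self.number)
        return Int(String(s[s.index(s.startIndex, offsetBy:ix)]))!
    }
    subscript(ix:Int, count:Int) -> Int { // a run of `count` digits starting at `ix`
        let s = String(self.number)
        let start = s.index(s.startIndex, offsetBy:ix)
        let end = s.index(start, offsetBy:count)
        return Int(s[start..<end])!
    }
}
var d = Digit(1234)
let run = d[1, 2] // 23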
An object type may be declared inside an object type declaration, forming a nested type:
class Dog {
    struct Noise {
        static var noise = "Woof"
    }
    func bark() {
        print(Dog.Noise.noise)
    }
}
A nested object type is no different from any other object type, but the rules for referring to it from the outside are changed; the surrounding object type acts as a namespace, and must be referred to explicitly in order to access the nested object type:
Dog.Noise.noise = "Arf"
Here, the Noise struct is namespaced inside the Dog class. This namespacing provides clarity: the name Noise does not float free, but is explicitly associated with the Dog class to which it belongs. Namespacing also allows more than one Noise struct to exist, without any clash of names. Swift built-in object types often take advantage of namespacing; for example, the String struct is one of several structs that contain an Index struct, with no clash of names.
On the whole, the names of object types will be global, and you will be able to refer to them simply by using their names. Instances, however, are another story. Instances must be deliberately created, one by one. That is what instantiation is for. Once you have created an instance, you can cause that instance to persist, by storing the instance in a variable with sufficient lifetime; using that variable as a reference, you can send instance messages to that instance, accessing instance properties and calling instance methods.
Instantiation may involve you calling an initializer, or some other object may create or provide the instance for you. A simple example is what happens when you manipulate a String, like this:
let s = "Hello, world"
let s2 = s.uppercased()
In that code, we end up with two String instances. The first one, s, we created using a string literal. The second one, s2, was created for us when we called the first string’s uppercased method. Thus we have two instances, and they will persist independently as long as our references to them persist; but we didn’t get either of them by calling an initializer.
In other cases, the instance you are interested in will already exist in some persistent fashion; the problem will then be to find a way of getting a reference to that instance.
Let’s say, for example, that this is a real-life iOS app. You will certainly have a root view controller, which will be an instance of some type of UIViewController. Let’s say it’s an instance of the ViewController class. Once your app is up and running, this instance already exists. It would then be utterly counterproductive to attempt to speak to the root view controller by instantiating the ViewController class:
let theVC = ViewController() // legal but stupid
All that code does is to make a second, different instance of the ViewController class, and your messages to that instance will be wasted, as it is not the particular already existing instance that you wanted to talk to. That is a very common beginner mistake; don’t make it.
Getting a reference to an already existing instance can be, of itself, an interesting problem. The process always starts with something you already have a reference to. Often, this will be a class. In iOS programming, the app itself is an instance, and there is a class that holds a reference to that instance and will hand it to you whenever you ask for it. That class is the UIApplication class, and the way to get a reference to the app instance is through its shared class property:
let app = UIApplication.shared
Now we have a reference to the application instance. The application instance has a keyWindow property:
let window = app.keyWindow
Now we have a reference to our app’s key window. That window owns the root view controller, and will hand us a reference to it, as its own rootViewController property; the app’s keyWindow is an Optional, so to get at its rootViewController we must unwrap the Optional:
let vc = window?.rootViewController
And voilà, we have a reference to our app’s root view controller. To obtain the reference to this persistent instance, we created, in effect, a chain of method calls and properties leading from the known to the unknown, from a globally available class to the particular desired instance:
let app = UIApplication.shared
let window = app.keyWindow
let vc = window?.rootViewController
Clearly, we can write that chain as an actual chain, using repeated dot-notation:
let vc = UIApplication.shared.keyWindow?.rootViewController
You don’t have to chain your instance messages into a single line — chaining through multiple let assignments is completely efficient, possibly more legible, and certainly easier to debug — but it’s a handy formulaic convenience and is particularly characteristic of dot-notated object-oriented languages like Swift.
An enum is an object type whose instances represent distinct predefined alternative values. Think of it as a list of known possibilities. An enum is the Swift way to express a set of constants that are alternatives to one another. An enum declaration includes case statements. Each case is the name of one of the alternatives. An instance of an enum will represent exactly one alternative — one case.
For example, in my Albumen app, different instances of the same view controller can list any of four different sorts of music library contents: albums, playlists, podcasts, or audiobooks. The view controller’s behavior is slightly different in each case. So I need a sort of four-way switch that I can set once when the view controller is instantiated, saying which sort of contents this view controller is to display. That sounds like an enum!
Here’s the basic declaration for that enum; I call it Filter, because each case represents a different way of filtering the contents of the music library:
enum Filter {
    case albums
    case playlists
    case podcasts
    case books
}
That enum doesn’t have an initializer. You can write an initializer for an enum, as I’ll demonstrate in a moment; but there is a default mode of initialization that you’ll probably use most of the time — the name of the enum followed by dot-notation and one of the cases. For example, here’s how to make an instance of Filter representing the albums case:
let type = Filter.albums
As a shortcut, if the type is known in advance, you can omit the name of the enum; the bare case must still be preceded by a dot. For example:
let type : Filter = .albums
You can’t say .albums just anywhere out of the blue, because Swift doesn’t know what enum it belongs to. But in that code, the variable is explicitly declared as a Filter, so Swift knows what .albums means. A similar thing happens when passing an enum instance as an argument in a function call:

func filterExpecter(_ type:Filter) {}
filterExpecter(.albums)
In the second line, I create an instance of Filter and pass it, all in one move, without having to include the name of the enum. That’s because Swift knows from the function declaration that a Filter is expected here.
In real life, the space savings when omitting the enum name can be considerable — especially because, when talking to Cocoa, the enum type names are often long. For example:
let v = UIView()
v.contentMode = .center
A UIView’s contentMode property is typed as a UIViewContentMode enum. Our code is neater and simpler because we don’t have to include the name UIViewContentMode explicitly here; .center is nicer than UIViewContentMode.center. But either is legal.
Instances of an enum with the same case are regarded as equal. Thus, you can compare an enum instance for equality against a case. Again, the type of enum is known from the first term in the comparison, so the second term can omit the enum name:
func filterExpecter(_ type:Filter) {
    if type == .albums {
        print("it's albums")
    }
}
filterExpecter(.albums) // "it's albums"
Optionally, when you declare an enum, you can add a type declaration. The cases then all carry with them a fixed (constant) value of that type. If the type is an integer numeric type, the values can be implicitly assigned, and will start at zero by default. For example:
enum PepBoy : Int {
    case manny
    case moe
    case jack
}
In that code, .manny carries a value of 0, .moe carries a value of 1, and so on.
If the type is String, the implicitly assigned values are the string equivalents of the case names. For example:
enum Filter : String {
    case albums
    case playlists
    case podcasts
    case books
}
In that code, .albums carries a value of "albums", and so on.
Regardless of the type, you can assign values explicitly as part of the case declarations, like this:
enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
}
The types attached to an enum in this way are limited to numbers and strings, and the values assigned must be literals. The values carried by the cases are called their raw values. An instance of this enum has just one case, so it has just one fixed raw value, which can be retrieved with its rawValue property:

let type = Filter.albums
print(type.rawValue) // Albums
Having each case carry a fixed raw value can be quite useful. In my Albumen app, the Filter cases really do have those String values, and type is a view controller property; and so when the view controller wants to know what title string to put at the top of the screen, it simply retrieves self.type.rawValue.
The raw value associated with each case must be unique within this enum; the compiler will enforce this rule. Therefore, the mapping works the other way: given a raw value, you can derive the case. For example, you can instantiate an enum that has raw values by using its init(rawValue:) initializer:
let type = Filter(rawValue:"Albums")
However, the attempt to instantiate the enum in this way might fail, because you might supply a raw value corresponding to no case; therefore, this is a failable initializer, and the value returned is an Optional. In that code, type is not a Filter; it’s an Optional wrapping a Filter. This might not be terribly important, however, because the thing you are most likely to want to do with an enum is to compare it for equality with a case of the enum; you can do that with an Optional without unwrapping it. This code is legal and works correctly:

let type = Filter(rawValue:"Albums")
if type == .albums { // ...
The raw values discussed in the preceding section are fixed in advance: a given case carries with it a certain raw value, and that’s that. Alternatively, you can construct a case whose constant value can be set when the instance is created. To do so, do not declare any type for the enum as a whole; instead, append a tuple type to the name of the case. There will usually be just one type in this tuple, so what you’ll write will look like a type name in parentheses. Any type may be declared. Here’s an example:
enum Error {
    case number(Int)
    case message(String)
    case fatal
}
That code means that, at instantiation time, an Error instance with the .number case must be assigned an Int value, an Error instance with the .message case must be assigned a String value, and an Error instance with the .fatal case can’t be assigned any value. Instantiation with assignment of a value is really a way of calling an initialization function, so to supply the value, you pass it as an argument in parentheses:
let err : Error = .number(4)
The attached value here is called an associated value. What you are supplying as you specify the associated value is actually a tuple, so it can contain literal values or value references; this is legal:
let num = 4
let err : Error = .number(num)
The tuple can contain more than one value, with or without labels; if the values have labels, they must be used at initialization time:
enum Error {
    case number(Int)
    case message(String)
    case fatal(n:Int, s:String)
}
let err : Error = .fatal(n:-12, s:"Oh the horror")
An enum case that declares an associated value is actually an initialization function, so you can capture a reference to that function and call the function later:
let fatalMaker = Error.fatal
let err = fatalMaker(n:-1000, s:"Unbelievably bad error")
I’ll explain how to extract the associated value from an actual instance of such an enum in Chapter 5.
At the risk of sounding like a magician explaining his best trick, I will now reveal how an Optional works. An Optional is simply an enum with two cases: .none and .some. If it is .none, it carries no associated value, and it equates to nil. If it is .some, it carries the wrapped value as its associated value.
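As a quick illustration of my own (not the actual library declaration), the equivalence can be seen directly:

let s : String? = "howdy"
let s2 : Optional<String> = .some("howdy") // the same thing, written out
let s3 : String? = .none // the same as assigning nil
print(s == s2) // true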
An explicit enum initializer must do what default initialization does: it must return a particular case of this enum. To do so, set self to the case. In this example, I’ll expand my Filter enum so that it can be initialized with a numeric argument:

enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    static var cases : [Filter] = [.albums, .playlists, .podcasts, .books]
    init(_ ix:Int) {
        self = Filter.cases[ix]
    }
}
Now there are three ways to make a Filter instance:
let type1 = Filter.albums
let type2 = Filter(rawValue:"Playlists")!
let type3 = Filter(2) // .podcasts
In that example, we’ll crash in the third line if the caller passes a number that’s out of range (less than 0 or greater than 3). If we want to avoid that, we can make this a failable initializer and return nil if the number is out of range:

enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    static var cases : [Filter] = [.albums, .playlists, .podcasts, .books]
    init?(_ ix:Int) {
        if !(0...3).contains(ix) {
            return nil
        }
        self = Filter.cases[ix]
    }
}
An enum can have multiple initializers. Enum initializers can delegate to one another by saying self.init(...). The only requirement is that, at some point in the calling chain, self must be set to a case; if that doesn’t happen, your enum won’t compile.
In this example, I improve my Filter enum so that it can be initialized with a String raw value without having to say rawValue: in the call. To do so, I declare a failable initializer with a string parameter that delegates to the built-in failable rawValue: initializer:

enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    static var cases : [Filter] = [.albums, .playlists, .podcasts, .books]
    init?(_ ix:Int) {
        if !(0...3).contains(ix) {
            return nil
        }
        self = Filter.cases[ix]
    }
    init?(_ rawValue:String) {
        self.init(rawValue:rawValue)
    }
}
Now there are four ways to make a Filter instance:
let type1 = Filter.albums
let type2 = Filter(rawValue:"Playlists")
let type3 = Filter(2) // .podcasts, wrapped in an Optional
let type4 = Filter("Playlists")
An enum can have instance properties and static properties, but there’s a limitation: an enum instance property can’t be a stored property. This makes sense, because if two instances of the same case could have different stored instance property values, they would no longer be equal to one another — which would undermine the nature and purpose of enums.
Computed instance properties are fine, however, and the value of the property can vary by rule in accordance with the case of self. In this example from my real code, I’ve associated an MPMediaQuery (obtained by calling an MPMediaQuery factory class method) with each case of my Filter enum, suitable for fetching the songs of that type from the music library:

enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    var query : MPMediaQuery {
        switch self {
        case .albums:
            return .albums()
        case .playlists:
            return .playlists()
        case .podcasts:
            return .podcasts()
        case .books:
            return .audiobooks()
        }
    }
}
If an enum instance property is a computed variable with a setter, other code can assign to this property. However, that code’s reference to the enum instance must be a variable (var), not a constant (let). If you try to assign to an enum instance property through a let reference, you’ll get a compile error.
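Here's a minimal sketch of my own showing the pattern; the title property and its round-trip through the raw value are purely illustrative:

enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    var title : String {
        get {
            return self.rawValue
        }
        set {
            self = Filter(rawValue:newValue) ?? .albums
        }
    }
}
var type = Filter.books // must be var, not let
type.title = "Albums" // type is now .albums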
An enum can have instance methods (including subscripts) and static methods. Writing an enum method is straightforward. Here’s an example from my own code. In a card game, the cards draw themselves as rectangles, ellipses, or diamonds. I’ve abstracted the drawing code into an enum that draws itself as a rectangle, an ellipse, or a diamond, depending on its case:
enum Shape {
    case rectangle
    case ellipse
    case diamond
    func addShape (to p: CGMutablePath, in r : CGRect) -> () {
        switch self {
        case .rectangle:
            p.addRect(r)
        case .ellipse:
            p.addEllipse(in:r)
        case .diamond:
            p.move(to: CGPoint(x:r.minX, y:r.midY))
            p.addLine(to: CGPoint(x: r.midX, y: r.minY))
            p.addLine(to: CGPoint(x: r.maxX, y: r.midY))
            p.addLine(to: CGPoint(x: r.midX, y: r.maxY))
            p.closeSubpath()
        }
    }
}
An enum instance method that modifies the enum itself must be marked as mutating. For example, an enum instance method might assign to an instance property of self; even though this is a computed property, such assignment is illegal unless the method is marked as mutating. An enum instance method can even change the case of self, by assigning to self; but again, the method must be marked as mutating. The caller of a mutating instance method must have a variable reference to the instance (var), not a constant reference (let).
In this example, I add an advance method to my Filter enum. The idea is that the cases constitute a sequence, and the sequence can cycle. By calling advance, I transform a Filter instance into an instance of the next case in the sequence:

enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    static var cases : [Filter] = [.albums, .playlists, .podcasts, .books]
    mutating func advance() {
        var ix = Filter.cases.index(of:self)!
        ix = (ix + 1) % 4
        self = Filter.cases[ix]
    }
}
And here’s how to call it:
var type = Filter.books
type.advance() // type is now Filter.albums
(A subscript setter is always considered mutating and does not have to be specially marked.)
An enum is a switch whose states have names. There are many situations where that’s a desirable thing. You could implement a multistate value yourself; for example, if there are five possible states, you could use an Int whose values can be 0 through 4. But then you would have to provide a lot of additional overhead — making sure that no other values are used, and interpreting those numeric values correctly. A list of five named cases is much better! Even when there are only two states, an enum is often better than, say, a mere Bool, because the enum’s states have names. With a Bool, you have to know what true and false signify in a particular usage; with an enum, the name of the enum and the names of its cases tell you its significance. Moreover, you can store extra information in an enum’s associated value or raw value; you can’t do that with a mere Bool.
For example, in my LinkSame app, the user can play a real game with a timer or a practice game without a timer. At various places in the code, I need to know which type of game this is. The game types are the cases of an enum:
enum InterfaceMode : Int {
    case timed = 0
    case practice = 1
}
The current game type is stored in an instance property interfaceMode, whose value is an InterfaceMode. Thus, it’s easy to set the game type by case name:

// ... initialize new game ...
self.interfaceMode = .timed
And it’s easy to examine the game type by case name:
// notify of high score only if user is not just practicing
if self.interfaceMode == .timed { // ...
So what are the raw value integers for? That’s the really clever part. They correspond to the segment indexes of a UISegmentedControl in the interface! Whenever I change the interfaceMode property, a setter observer also selects the corresponding segment of the UISegmentedControl (self.timedPractice), simply by fetching the rawValue of the current enum case:

var interfaceMode : InterfaceMode = .timed {
    willSet (mode) {
        self.timedPractice?.selectedSegmentIndex = mode.rawValue
    }
}
A struct is the Swift object type par excellence. An enum, with its fixed set of cases, is a reduced, specialized kind of object. A class, at the other extreme, will often turn out to be overkill; it has some features that a struct lacks, but if you don’t need those features, a struct may be preferable.
Of the numerous object types declared in the Swift header, only three are classes (and you are unlikely to encounter any of them consciously). On the contrary, nearly all the built-in object types provided by Swift itself are structs. A String is a struct. An Int is a struct. A Range is a struct. An Array is a struct. And so on. That shows how powerful a struct can be.
A struct that doesn’t have an explicit initializer and that doesn’t need an explicit initializer — because it has no stored properties, or because all its stored properties are assigned default values as part of their declaration — automatically gets an implicit initializer with no parameters, init(). For example:

struct Digit {
    var number = 42
}
That struct can be initialized by saying Digit(). But if you add any explicit initializers of your own, you lose that implicit initializer:

struct Digit {
    var number = 42
    init(number:Int) {
        self.number = number
    }
}
Now you can say Digit(number:42), but you can’t say Digit() any longer. Of course, you can add an explicit initializer that does the same thing:

struct Digit {
    var number = 42
    init() {}
    init(number:Int) {
        self.number = number
    }
}
Now you can say Digit() once again, as well as Digit(number:42).
A struct that has stored properties and that doesn’t have an explicit initializer automatically gets an implicit initializer derived from its instance properties. This is called the memberwise initializer. For example:
struct Digit {
    var number : Int // could use "let" here instead
}
That struct is legal — indeed, it is legal even if the number property is declared with let instead of var — even though it seems we have not fulfilled the contract requiring us to initialize all stored properties in their declaration or in an initializer. The reason is that this struct automatically has a memberwise initializer which does initialize all its properties. In this case, the memberwise initializer is init(number:), and you can say Digit(number:42).
The memberwise initializer exists even for var stored properties that are assigned a default value in their declaration; thus, this struct has a memberwise initializer init(number:), in addition to its implicit init() initializer:

struct Digit {
    var number = 42
}
But if you add any explicit initializers of your own, you lose the memberwise initializer (though of course you can write an explicit initializer that does the same thing).
If a struct has any explicit initializers, then they must fulfill the contract that all stored properties must be initialized either by direct initialization in the declaration or by all initializers. If a struct has multiple explicit initializers, they can delegate to one another by saying self.init(...).
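For instance, here's a small sketch of my own of a struct whose parameterless initializer delegates to another initializer:

struct Digit {
    var number : Int
    init(number:Int) {
        self.number = number
    }
    init() { // delegates to init(number:)
        self.init(number:42)
    }
}
let d = Digit() // number is 42
let d2 = Digit(number:7) // number is 7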
A struct can have instance properties and static properties, and they can be stored or computed variables. If other code wants to set a property of a struct instance, its reference to that instance must be a variable (var), not a constant (let).
A struct can have instance methods (including subscripts) and static methods. If an instance method sets a property, it must be marked as mutating, and the caller’s reference to the struct instance must be a variable (var), not a constant (let). A mutating instance method can even replace this instance with another instance, by setting self to a different instance of the same struct. (A subscript setter is always considered mutating and does not have to be specially marked.)
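As a quick sketch of my own (the replace(with:) method name is purely illustrative):

struct Digit {
    var number : Int
    mutating func replace(with n:Int) {
        self = Digit(number:n) // replaces this instance wholesale
    }
}
var d = Digit(number:123) // must be var, not let
d.replace(with:42) // d.number is now 42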
I very often use a degenerate struct as a handy namespace for constants. I call such a struct “degenerate” because it consists entirely of static members; I don’t intend to use this object type to make any instances.
For example, let’s say I’m going to be storing user preference information in Cocoa’s UserDefaults. UserDefaults is a kind of dictionary: each item is accessed through a key. The keys are typically strings. A common programmer mistake is to write out these string keys literally every time a key is used; if you then misspell a key name, there’s no penalty at compile time, but your code will mysteriously fail to work correctly. The proper approach is to embody these keys as constant strings and use the names of the strings; that way, if you make a mistake typing the name of a string, the compiler can catch you. A struct with static members is a great way to define those constant strings and clump their names into a namespace:
struct Default {
    static let rows = "CardMatrixRows"
    static let columns = "CardMatrixColumns"
    static let hazyStripy = "HazyStripy"
}
That code means that I can now refer to a UserDefaults key with a name, such as Default.hazyStripy.
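In use, that might look something like this (a minimal sketch of my own; the Boolean value is just for illustration):

let defaults = UserDefaults.standard
defaults.set(true, forKey: Default.hazyStripy) // write the preference
let hazyStripy = defaults.bool(forKey: Default.hazyStripy) // read it back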
A class is similar to a struct, with the following key differences:
Classes are reference types. This means, among other things, that a class instance has two remarkable features that are not true of struct instances or enum instances: it is mutable in place, and there can be multiple references to one and the same instance.

A class instance is mutable in place: even if your reference to the instance is a constant (let), you can change the value of an instance property through that reference. An instance method of a class never has to be marked mutating (and cannot be).

In Objective-C, classes are the only object type. Some built-in Swift struct types are magically bridged to Objective-C class types, but your custom struct types don’t have that magic. Thus, when programming iOS with Swift, a primary reason for declaring a class, rather than a struct, is as a form of interchange with Objective-C and Cocoa.
A major difference between enums and structs, on the one hand, and classes, on the other, is that enums and structs are value types, whereas classes are reference types.
A value type is not mutable in place, even though it seems to be. For example, consider a struct. A struct is a value type:
struct Digit {
    var number : Int
    init(_ n:Int) {
        self.number = n
    }
}
Now, Swift’s syntax of assignment would lead us to believe that changing a Digit’s number is possible:

var d = Digit(123)
d.number = 42
But in reality, when you apparently mutate an instance of a value type, you are actually replacing that instance with a different instance. To see that this is true, add a setter observer:
var d : Digit = Digit(123) {
    didSet {
        print("d was set")
    }
}
d.number = 42 // "d was set"
That explains why it is impossible to mutate a value type instance if the reference to that instance is declared with let:

struct Digit {
    var number : Int
    init(_ n:Int) {
        self.number = n
    }
}
let d = Digit(123)
d.number = 42 // compile error
Under the hood, this change would require us to replace the Digit instance pointed to by d with another Digit instance — and we can’t do that, because it would mean assigning into d, which is exactly what the let declaration forbids.
That, in turn, is why an instance method of a struct or enum that sets a property of the instance must be marked explicitly with the mutating keyword. For example:

struct Digit {
    var number : Int
    init(_ n:Int) {
        self.number = n
    }
    mutating func changeNumberTo(_ n:Int) {
        self.number = n
    }
}
Without the mutating keyword, that code won’t compile. The mutating keyword assures the compiler that you understand what’s really happening here. If that method is called, it replaces the instance; therefore, it can be called only on a reference declared with var, not let:

let d = Digit(123)
d.changeNumberTo(42) // compile error
None of what I’ve just said, however, applies to class instances! Class instances are reference types, not value types. An instance property of a class, to be settable, must be declared with var, obviously; but the reference to a class instance does not have to be declared with var in order to set that property through that reference:

class Dog {
    var name : String = "Fido"
}
let rover = Dog()
rover.name = "Rover" // fine
In the last line of that code, the class instance pointed to by rover is being mutated in place. No implicit assignment to rover is involved, and so the let declaration is powerless to prevent the mutation. A setter observer on a Dog variable is not called when a property is set:

var rover : Dog = Dog() {
    didSet {
        print("did set rover")
    }
}
rover.name = "Rover" // nothing in console
The setter observer would be called if we were to set rover explicitly (to another Dog instance), but it is not called merely because we change a property of the Dog instance already pointed to by rover.
Those examples involve a declared variable reference. Exactly the same difference between a value type and a reference type may be seen with a parameter of a function call. When we receive an instance of a value type as a parameter into a function body, the compiler will stop us in our tracks if we try to assign to its instance property. This doesn’t compile:
func digitChanger(_ d:Digit) {
    d.number = 42 // compile error
}
But this does compile:
func dogChanger(_ d:Dog) {
    d.name = "Rover"
}
With a reference type, there is in effect a concealed level of indirection between your reference to the instance and the instance itself; the reference actually refers to a pointer to the instance. This, in turn, has another important implication: it means that when a class instance is assigned to a variable or passed as an argument to a function or as the result of a function, you can wind up with multiple references to the same object. That is not true of structs and enums. When an enum instance or a struct instance is assigned or passed, what is assigned or passed is essentially a new copy of that instance. But when a class instance is assigned or passed, what is assigned or passed is a reference to the same instance.
To prove it, I’ll assign one reference to another, and then mutate the second reference — and then I’ll examine what happened to the first reference. Let’s start with the struct:
var d = Digit(123)
print(d.number) // 123
var d2 = d // assignment!
d2.number = 42
print(d.number) // 123
In that code, we changed the number property of d2, a struct instance; but nothing happened to the number property of d. Now let’s try the class:

var fido = Dog()
print(fido.name) // Fido
var rover = fido // assignment!
rover.name = "Rover"
print(fido.name) // Rover
In that code, we changed the name
property of rover
, a class instance — and the name
property of fido
was changed as well! That’s because, after the assignment in the third line, fido
and rover
refer to one and the same instance.
The same thing is true of parameter passing. With a class instance, what is passed is a reference to the same instance:
func dogChanger(_ d:Dog) {
    d.name = "Rover"
}
var fido = Dog()
print(fido.name) // "Fido"
dogChanger(fido)
print(fido.name) // "Rover"
The change made to d
inside the function dogChanger
affected our Dog instance fido
! You can’t do that with an enum or struct instance parameter, because the instance is effectively copied as it is passed. But handing a class instance to a function does not copy that instance; it is more like lending that instance to the function.
The ability to generate multiple references to the same instance is significant particularly in a world of object-based programming, where objects persist and can have properties that persist along with them. If object A and object B are both long-lived objects, and if they both have a Dog property (where Dog is a class), and if they have each been handed a reference to one and the same Dog instance, then either object A or object B can mutate its Dog, and this mutation will affect the other’s Dog. You can thus be holding on to an object, only to discover that it has been mutated by someone else behind your back. If that happens unexpectedly, it can put your program into an invalid state.
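To see this concretely, here's a minimal sketch; the Owner class and its dog property are hypothetical stand-ins for the long-lived objects A and B:
class Dog {
    var name = "Fido"
}
class Owner {
    var dog : Dog
    init(dog:Dog) {
        self.dog = dog
    }
}
let shared = Dog()
let objectA = Owner(dog:shared)
let objectB = Owner(dog:shared)
objectA.dog.name = "Rover" // object A mutates its Dog...
print(objectB.dog.name) // Rover: object B's Dog changed too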
Class instances are also more complicated behind the scenes. Swift has to manage their memory, precisely because there can be multiple references to the same object; this management can involve quite a bit of overhead. At an even lower level, the mere storage of class instances in memory entails some necessary overhead.
On the whole, therefore, you should prefer a value type (such as a struct) to a reference type (a class) wherever possible. Struct instances are not shared between references, and so you are relieved from any worry about such an instance being mutated behind your back; moreover, under the hood, storage and memory management are far simpler as well. New in Swift 3, the language itself will help you by imposing value types in front of many Cocoa Foundation reference types. For example, Objective-C NSDate and NSData are classes, but Swift 3 will steer you toward using struct types Date and Data instead.
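As a small illustration of the value semantics you get from one of those overlay types (assuming Foundation is imported), mutating a copy of a Date leaves the original untouched:
import Foundation
let start = Date()
var later = start // Date is a struct, so this is a copy
later.addTimeInterval(60) // mutate the copy...
print(start == later) // false: the original is unaffected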
But don’t get the wrong idea. Classes are not bad; they’re good! For one thing, a class instance is very efficient to pass around, because all you’re passing is a pointer. No matter how big and complicated a class instance may be, no matter how many properties it may have containing vast amounts of data, passing the instance is incredibly fast and efficient, because no new data is generated.
Even more important, there are many situations where the independent identity of a class instance, no matter how many times it is referred to, is exactly what you want. The extended lifetime of a class instance, as it is passed around, can be crucial to its functionality and integrity. In particular, only a class instance can successfully represent an independent reality. For example, a UIView needs to be a class, not a struct, because an individual UIView instance, no matter how it gets passed around, must continue to represent the same single real and persistent view in your running app’s interface.
Still another reason for preferring a class over a struct or enum is when you need recursive references. A value type cannot be structurally recursive: a stored instance property of a value type cannot be an instance of the same type. This code won’t compile:
struct Dog { // compile error
    var puppy : Dog?
}
More complex circular chains, such as a Dog with a Puppy property and a Puppy with a Dog property, are similarly illegal. But if Dog is a class instead of a struct, there’s no error. This is a consequence of the nature of memory management of value types as opposed to reference types. The moral is clear: if you need a property of a Dog to be a Dog, Dog has to be a class.
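Here's the same Dog written as a class; now the recursive property is legal:
class Dog { // fine, because Dog is now a class
    var puppy : Dog?
}
let d = Dog()
d.puppy = Dog() // a Dog can have a Dog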
Two classes can be subclass and superclass of one another. For example, we might have a class Quadruped and a class Dog and make Quadruped the superclass of Dog. A class may have many subclasses, but a class can have only one immediate superclass. I say “immediate” because that superclass might itself have a superclass, and so on in a rising chain, until we get to the ultimate superclass, called the base class, or root class. Because a class can have many subclasses but only one superclass, there is a hierarchical tree of subclasses, each branching from its superclass, and so on, with a single class, the base class, at the top.
As far as the Swift language itself is concerned, there is no requirement that a class should have any superclass, or, if it does have a superclass, that it should ultimately be descended from any particular base class. Thus, a Swift program can have many classes that have no superclass, and it can have many independent hierarchical subclass trees, each descended from a different base class.
Cocoa, however, doesn’t work that way. In Cocoa, there is effectively just one base class — NSObject, which embodies all the functionality necessary for a class to be a class in the first place — and all other classes are subclasses, at some level, of that one base class. Cocoa thus consists of one huge tree of hierarchically arranged classes, even before you write a single line of code or create any classes of your own.
We can imagine diagramming this tree as an outline. And in fact Xcode will show you this outline (Figure 4.1): in an iOS project window, choose View → Navigators → Show Symbol Navigator and click Hierarchical, with the first and third icons in the filter bar selected (blue). The Cocoa classes are the part of the tree descending from NSObject.
The reason for having a superclass–subclass relationship in the first place is to allow related classes to share functionality. Suppose, for example, we have a Dog class and a Cat class, and we are considering declaring a walk
method for both of them. We might reason that both a dog and a cat walk in pretty much the same way, by virtue of both being quadrupeds. So it might make sense to declare walk
as a method of the Quadruped class, and make both Dog and Cat subclasses of Quadruped. The result is that both Dog and Cat can be sent the walk
message, even if neither of them has a walk
method, because each of them has a superclass that does have a walk
method. We say that a subclass inherits the methods of its superclass.
To declare that a certain class is a subclass of a certain superclass, add a colon and the superclass name after the class’s name in its declaration. So, for example:
class Quadruped {
    func walk () {
        print("walk walk walk")
    }
}
class Dog : Quadruped {}
class Cat : Quadruped {}
Now let’s prove that Dog has indeed inherited walk
from Quadruped:
let fido = Dog()
fido.walk() // walk walk walk
Observe that, in that code, the walk
message can be sent to a Dog instance just as if the walk
instance method were declared in the Dog class, even though the walk
instance method is in fact declared in a superclass of Dog. That’s inheritance at work.
The purpose of subclassing is not merely so that a class can inherit another class’s methods; it’s so that it can also declare methods of its own. Typically, a subclass consists of the methods inherited from its superclass and then some. If Dog has no methods of its own, after all, it’s hard to see why it should exist separately from Quadruped. But if a Dog knows how to do something that not every Quadruped knows how to do — let’s say, bark — then it makes sense as a separate class. If we declare bark
in the Dog class, and walk
in the Quadruped class, and make Dog a subclass of Quadruped, then Dog inherits the ability to walk from the Quadruped class and also knows how to bark:
class Quadruped {
    func walk () {
        print("walk walk walk")
    }
}
class Dog : Quadruped {
    func bark () {
        print("woof")
    }
}
Again, let’s prove that it works:
let fido = Dog()
fido.walk() // walk walk walk
fido.bark() // woof
Within a class, it is a matter of indifference whether that class has an instance method because that method is declared in that class or because the method is declared in a superclass and inherited. A message to self
works equally well either way. In this code, we have declared a barkAndWalk
instance method that sends two messages to self
, without regard to where the corresponding methods are declared (one is native to the subclass, one is inherited from the superclass):
class Quadruped {
    func walk () {
        print("walk walk walk")
    }
}
class Dog : Quadruped {
    func bark () {
        print("woof")
    }
    func barkAndWalk() {
        self.bark()
        self.walk()
    }
}
And here’s proof that it works:
let fido = Dog()
fido.barkAndWalk() // woof walk walk walk
It is also permitted for a subclass to redefine a method inherited from its superclass. For example, perhaps some dogs bark differently from other dogs. We might have a class NoisyDog, for instance, that is a subclass of Dog. Dog declares bark
, but NoisyDog also declares bark
, and defines it differently from how Dog defines it. This is called overriding. The very natural rule is that if a subclass overrides a method inherited from its superclass, then when the corresponding message is sent to an instance of that subclass, it is the subclass’s version of that method that is called.
In Swift, when you override something inherited from a superclass, you must explicitly acknowledge this fact by preceding its declaration with the keyword override
. So, for example:
class Quadruped {
    func walk () {
        print("walk walk walk")
    }
}
class Dog : Quadruped {
    func bark () {
        print("woof")
    }
}
class NoisyDog : Dog {
    override func bark () {
        print("woof woof woof")
    }
}
And let’s try it:
let fido = Dog()
fido.bark() // woof
let rover = NoisyDog()
rover.bark() // woof woof woof
Observe that a subclass method by the same name as a superclass’s method is not necessarily, of itself, an override. Recall that Swift can distinguish two functions with the same name, provided they have different signatures. Those are different functions, and so an implementation of one in a subclass is not an override of the other in a superclass. An override situation exists only when the subclass redefines the same method that it inherits from a superclass — using the same name, including the external parameter names, and the same signature.
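To illustrate (just a sketch; the walk(speed:) method is my own invention), a subclass method with the same name but a different signature is an overload, not an override, so no override keyword is needed or permitted:
class Quadruped {
    func walk() {
        print("walk walk walk")
    }
}
class Dog : Quadruped {
    func walk(speed:Int) { // different signature: an overload, not an override
        for _ in 1...speed { print("walk") }
    }
}
let fido = Dog()
fido.walk() // walk walk walk (inherited)
fido.walk(speed:2) // walk (twice), Dog's own method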
It often happens that we want to override something in a subclass and yet access the thing overridden in the superclass. This is done by sending a message to the keyword super
. Our bark
implementation in NoisyDog is a case in point. What NoisyDog really does when it barks is the same thing Dog does when it barks, but more times. We’d like to express that relationship in our implementation of NoisyDog’s bark
. To do so, we have NoisyDog’s bark
implementation send the bark
message, not to self
(which would be circular), but to super
; this causes the search for a bark
instance method implementation to start in the superclass rather than in our own class:
class Dog : Quadruped {
    func bark () {
        print("woof")
    }
}
class NoisyDog : Dog {
    override func bark () {
        for _ in 1...3 {
            super.bark()
        }
    }
}
And it works:
let fido = Dog()
fido.bark() // woof
let rover = NoisyDog()
rover.bark() // woof woof woof
A subscript function is a method. If a superclass declares a subscript, the subclass can declare a subscript with the same signature, provided it designates it with the override
keyword. To call the superclass subscript implementation, the subclass can use square brackets after the keyword super
(e.g. super[3]
).
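Here's a sketch of what that looks like; the Bag and LoudBag classes are hypothetical:
class Bag {
    subscript(ix:Int) -> String {
        return "item \(ix)"
    }
}
class LoudBag : Bag {
    override subscript(ix:Int) -> String {
        return super[ix].uppercased() // call the superclass subscript implementation
    }
}
let bag = LoudBag()
print(bag[3]) // ITEM 3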
Along with methods, a subclass also inherits its superclass’s properties. Naturally, the subclass may also declare additional properties of its own. It is possible to override an inherited property (with some restrictions that I’ll talk about later).
A class declaration can prevent the class from being subclassed by preceding the class declaration with the final
keyword. A class declaration can prevent a class member from being overridden by a subclass by preceding the member’s declaration with the final
keyword.
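For example (these declarations are hypothetical):
final class Sealed { // no class may subclass Sealed
}
class Dog {
    final func bark() { // no subclass of Dog may override bark()
        print("woof")
    }
}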
Initialization of a class instance is considerably more complicated than initialization of a struct or enum instance, because of the existence of class inheritance. The chief task of an initializer is to ensure that all properties have an initial value, thus making the instance well-formed as it comes into existence; and an initializer may have other tasks to perform that are essential to the initial state and integrity of this instance. A class, however, may have a superclass, which may have properties and initializers of its own. Thus we must somehow ensure that when a subclass is initialized, its superclass’s properties are initialized and the tasks of its initializers are performed in good order, in addition to initializing the properties and performing the initializer tasks of the subclass itself.
Swift solves this problem coherently and reliably — and ingeniously — by enforcing some clear and well-defined rules about what a class initializer must do.
The rules begin with a distinction between the kinds of initializer that a class can have:
A designated initializer is the normal kind: a class initializer is a designated initializer unless it is marked otherwise. A designated initializer may not delegate to another initializer in the same class; it is illegal for a designated initializer to contain the phrase self.init(...).

A convenience initializer is marked with the keyword convenience. It is a delegating initializer; it must contain the phrase self.init(...). Moreover, a convenience initializer must delegate to a designated initializer: when it says self.init(...), it must call a designated initializer in the same class — or else it must call another convenience initializer in the same class, thus forming a chain of convenience initializers which ends by calling a designated initializer in the same class.

In addition, a class with no stored properties, or whose stored properties are all initialized as part of their declaration, and that has no explicit initializers, has an implicit designated initializer init().
Here are some examples. This class has no stored properties, so it has an implicit init()
designated initializer:
class Dog {
}
let d = Dog()
This class’s stored properties have default values, so it has an implicit init()
designated initializer too:
class Dog {
    var name = "Fido"
}
let d = Dog()
This class’s stored properties have default values, but it has no implicit init()
initializer because it has an explicit designated initializer:
class Dog {
    var name = "Fido"
    init(name:String) {self.name = name}
}
let d = Dog(name:"Rover") // ok
let d2 = Dog() // compile error
This class’s stored properties have default values, and it has an explicit initializer, but it also has an implicit init()
initializer because its explicit initializer is a convenience initializer. Moreover, the implicit init()
initializer is a designated initializer, so the convenience initializer can delegate to it:
class Dog {
    var name = "Fido"
    convenience init(name:String) {
        self.init()
        self.name = name
    }
}
let d = Dog(name:"Rover")
let d2 = Dog()
This class has stored properties without default values; it has an explicit designated initializer, and all of those properties are initialized in that designated initializer:
class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
}
let d = Dog(name:"Rover", license:42)
This class is similar to the previous example, but it also has convenience initializers forming a chain that ends with a designated initializer:
class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
    convenience init() {
        self.init(license:1)
    }
}
let d = Dog()
Note that the rules about what else an initializer can say and when it can say it, as I described them earlier in this chapter, are still in force:
A designated initializer cannot, in general, say self until all of this class's properties have been initialized. A convenience initializer, being a delegating initializer, cannot say self until after it has called, directly or indirectly, a designated initializer (and cannot set an immutable property at all).

Having defined and distinguished between designated initializers and convenience initializers, we are ready for the rules about what happens with regard to initializers when a class is itself a subclass of some other class:

If a subclass doesn't declare any initializers of its own, its only initializers are the ones it inherits from its superclass. (Even a subclass with no stored properties of its own has no implicit init() initializer unless it inherits it from its superclass.)

If a subclass declares only convenience initializers of its own, it also inherits its superclass's initializers; that inheritance is what provides self with the designated initializers that the convenience initializers must call.

If a subclass declares any designated initializers of its own, the entire game changes drastically. Now, no initializers are inherited! The existence of an explicit designated initializer blocks initializer inheritance. The only initializers the subclass now has are the initializers that you explicitly write. (However, there's an exception, which I'll come to in a moment.)
Every designated initializer in the subclass now has an extra requirement: it must call one of the superclass's designated initializers, by saying super.init(...). Moreover, the rules about saying self continue to apply. A subclass designated initializer must do things in this order: first, it must see to it that all of the subclass's own properties are initialized; then it must say super.init(...), and the initializer that it calls must be a designated initializer; only then may it say self for any other reason — to call an instance method, say, or to access an inherited property.

If a designated initializer doesn't call super.init(...), then super.init() is called implicitly if possible. (I don't like this feature of Swift: in my view, Swift should not indulge in secret behavior, even if that behavior might be considered "helpful.")

A convenience initializer in the subclass must, as always, say self.init(...), calling a designated initializer directly or (through a chain of convenience initializers) indirectly. There are no inherited initializers, so the designated initializer that a convenience initializer calls must be declared in the subclass.

Superclass initializers can be overridden in the subclass, in accordance with these restrictions: a subclass initializer whose signature matches a convenience initializer of the superclass is not an override and must not be marked override, whereas a subclass initializer whose signature matches a designated initializer of the superclass is an override and must be marked override. The superclass designated initializer that an override designated initializer calls with super.init(...) can be the one that it overrides.

Generally, if a subclass has any designated initializers, no initializers are inherited. But if a subclass overrides all of its superclass's designated initializers, then the subclass does inherit the superclass's convenience initializers.
If an initializer called by a failable initializer is failable, the calling syntax does not change, and no additional test is needed — if a called failable initializer fails, the whole initialization process will fail (and will be aborted) immediately.
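For instance, here's a sketch; the failable init?(name:) initializer, which fails if the name is empty, is my own invention:
class Dog {
    var name : String
    init?(name:String) {
        if name.isEmpty { return nil }
        self.name = name
    }
}
class NoisyDog : Dog {
    override init?(name:String) {
        super.init(name:name) // no extra test: if this fails, the whole initialization fails
    }
}
let nd = NoisyDog(name:"") // nil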
There are some additional restrictions on failable initializers: init can override init?, but not vice versa; init? can call init; and init can call init?, unwrapping the result with an exclamation mark (and if the init? fails, you'll crash).

At no time can a subclass initializer set a constant (let) property of a superclass. This is because, by the time the subclass is allowed to do anything other than initialize its own properties and call another initializer, the superclass has finished its own initialization and the door for initializing its constants has closed.
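Here's a sketch of the compile error; the Dog and NoisyDog classes here are contrived for the purpose:
class Dog {
    let name : String
    init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
    init() {
        super.init(name:"Fido")
        self.name = "Rover" // compile error: cannot change a superclass constant
    }
}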
Here are some basic examples. We start with a class whose subclass has no explicit initializers of its own:
class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
}
Given that code, you can make a NoisyDog like this:
let nd1 = NoisyDog(name:"Fido", license:1)
let nd2 = NoisyDog(license:2)
That code is legal, because NoisyDog inherits its superclass’s initializers. However, you can’t make a NoisyDog like this:
let nd3 = NoisyDog() // compile error
That code is illegal. Even though a NoisyDog has no properties of its own, it has no implicit init()
initializer; its initializers are its inherited initializers, and its superclass, Dog, has no implicit init()
initializer to inherit.
Now here is a class whose subclass’s only explicit initializer is a convenience initializer:
class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
    convenience init(name:String) {
        self.init(name:name, license:1)
    }
}
Observe how NoisyDog’s convenience initializer fulfills its contract by calling self.init(...)
to call a designated initializer — which it happens to have inherited. Given that code, there are three ways to make a NoisyDog, just as you would expect:
let nd1 = NoisyDog(name:"Fido", license:1)
let nd2 = NoisyDog(license:2)
let nd3 = NoisyDog(name:"Rover")
Next, here is a class whose subclass declares a designated initializer:
class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
    init(name:String) {
        super.init(name:name, license:1)
    }
}
NoisyDog’s explicit initializer is now a designated initializer. It fulfills its contract by calling a designated initializer in super
. NoisyDog has now cut off inheritance of all initializers; the only way to make a NoisyDog is like this:
let nd1 = NoisyDog(name:"Rover")
Finally, here is a class whose subclass overrides its designated initializers:
class Dog {
    var name : String
    var license : Int
    init(name:String, license:Int) {
        self.name = name
        self.license = license
    }
    convenience init(license:Int) {
        self.init(name:"Fido", license:license)
    }
}
class NoisyDog : Dog {
    override init(name:String, license:Int) {
        super.init(name:name, license:license)
    }
}
NoisyDog has overridden all of its superclass’s designated initializers, so it inherits its superclass’s convenience initializers. There are thus two ways to make a NoisyDog:
let nd1 = NoisyDog(name:"Rover", license:1)
let nd2 = NoisyDog(license:2)
Those examples illustrate the main rules that you should keep in your head. You probably don’t need to memorize the remaining rules, because the compiler will enforce them, and will keep slapping you down until you get them right.
There’s one more thing to know about class initializers: a class initializer may be preceded by the keyword required
. This means that a subclass may not lack it. This, in turn, means that if a subclass implements designated initializers, thus blocking inheritance, it must override this initializer. Here’s a (rather pointless) example:
class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
    var obedient = false
    init(obedient:Bool) {
        self.obedient = obedient
        super.init(name:"Fido")
    }
} // compile error
That code won’t compile. init(name:)
is marked required
; thus, our code won’t compile unless we inherit or override init(name:)
in NoisyDog. But we cannot inherit it, because, by implementing the NoisyDog designated initializer init(obedient:)
, we have blocked inheritance. Therefore we must override it:
class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
    var obedient = false
    init(obedient:Bool) {
        self.obedient = obedient
        super.init(name:"Fido")
    }
    required init(name:String) {
        super.init(name:name)
    }
}
Observe that our overridden required initializer is not marked with override
, but is marked with required
, thus guaranteeing that the requirement continues drilling down to any further subclasses.
I have explained what declaring an initializer as required
does, but I have not explained why you’d need to do it. I’ll give examples later in this chapter.
A class, and only a class (not the other flavors of object type), can have a deinitializer. This is a function declared with the keyword deinit
followed by curly braces containing the function body. You never call this function yourself; it is called by the runtime when an instance of this class goes out of existence. If a class has a superclass, the subclass’s deinitializer (if any) is called before the superclass’s deinitializer (if any).
The idea of a deinitializer is that you might want to perform some cleanup, or just log to the console to prove to yourself that your instance is going out of existence in good order. I’ll take advantage of deinitializers when I discuss memory management issues in Chapter 5.
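A minimal sketch: this Dog logs when it goes out of existence, which we can see by letting the only reference to it go out of scope:
class Dog {
    deinit {
        print("a Dog is going out of existence")
    }
}
do {
    let d = Dog()
    _ = d
} // prints "a Dog is going out of existence": the only reference went out of scope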
A subclass can override its inherited properties. The override must have the same name and type as the inherited property, and must be marked with override
. (A property cannot have the same name as an inherited property but a different type, as there is no way to distinguish them.)
The chief restriction here is that an override property cannot be a stored property. If the inherited property is writable, the subclass's override may consist simply of adding setter observers to it. Alternatively, the subclass's override may be a computed property; in that case, if the inherited property is writable, the overriding computed property must supply both a getter and a setter.
The overriding property’s functions may refer to — and may read from and write to — the inherited property, through the super
keyword.
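For instance, here's a sketch in which NoisyDog overrides Dog's stored name property with a computed property that funnels reads and writes through super:
class Dog {
    var name = "Fido"
}
class NoisyDog : Dog {
    override var name : String {
        get {
            return super.name
        }
        set {
            super.name = newValue.uppercased() // a noisy dog shouts its name
        }
    }
}
let nd = NoisyDog()
nd.name = "Rover"
print(nd.name) // ROVER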
A class can have static members, marked static
, just like a struct or an enum. It can also have class members, marked class
. Both static and class members are inherited by subclasses.
The chief difference between static and class methods from the programmer’s point of view is that a static method cannot be overridden; it is as if static
were a synonym for class final
.
Here, for example, I’ll use a static method to express what dogs say:
class Dog {
    static func whatDogsSay() -> String {
        return "woof"
    }
    func bark() {
        print(Dog.whatDogsSay())
    }
}
A subclass now inherits whatDogsSay
, but can’t override it. No subclass of Dog may contain any implementation of a class method or a static method whatDogsSay
with this same signature.
Now I’ll use a class method to express what dogs say:
class Dog {
    class func whatDogsSay() -> String {
        return "woof"
    }
    func bark() {
        print(Dog.whatDogsSay())
    }
}
A subclass inherits whatDogsSay
, and can override it, either as a class method or as a static method:
class NoisyDog : Dog {
    override class func whatDogsSay() -> String {
        return "WOOF"
    }
}
The difference between static properties and class properties is similar, but with an additional, rather dramatic qualification: static properties can be stored, but class properties can only be computed.
Here, I’ll use a static class property to express what dogs say:
class Dog {
    static var whatDogsSay = "woof"
    func bark() {
        print(Dog.whatDogsSay)
    }
}
A subclass inherits whatDogsSay
, but can’t override it; no subclass of Dog can declare a class or static property whatDogsSay
.
Now I’ll use a class property to express what dogs say. It cannot be a stored property, so I’ll have to use a computed property instead:
class Dog {
    class var whatDogsSay : String {
        return "woof"
    }
    func bark() {
        print(Dog.whatDogsSay)
    }
}
A subclass inherits whatDogsSay
and can override it either as a class property or as a static property. But even as a static property the subclass’s override cannot be a stored property, in keeping with the rules of property overriding that I outlined earlier:
class NoisyDog : Dog {
    override static var whatDogsSay : String {
        return "WOOF"
    }
}
When a computer language has a hierarchy of types and subtypes, it must resolve the question of what such a hierarchy means for the relationship between the type of an object and the declared type of a reference to that object. Swift obeys the principles of polymorphism. In my view, it is polymorphism that turns an object-based language into a full-fledged object-oriented language. We may summarize Swift’s polymorphism principles as follows:
To see what these principles mean in practice, imagine we have a Dog class, along with its subclass, NoisyDog:
class Dog {
}
class NoisyDog : Dog {
}
let d : Dog = NoisyDog()
The substitution rule says that the last line is legal: we can assign a NoisyDog instance to a reference, d
, that is typed as a Dog. The internal identity rule says that, under the hood, d
now is a NoisyDog.
You may be asking: How is the internal identity rule manifested? If a reference to a NoisyDog is typed as a Dog, in what sense is this “really” a NoisyDog? To illustrate, let’s examine what happens when a subclass overrides an inherited method. Let me redefine Dog and NoisyDog to demonstrate:
class Dog {
    func bark() {
        print("woof")
    }
}
class NoisyDog : Dog {
    override func bark() {
        super.bark(); super.bark()
    }
}
Now try to guess what happens when this code runs:
func tellToBark(_ d:Dog) {
    d.bark()
}
var nd = NoisyDog()
tellToBark(nd)
That code is legal, because, by the substitution principle, we can pass nd
, typed as a NoisyDog, where a Dog is expected. Now, inside the tellToBark
function, d
is typed as a Dog. How will it react to being told to bark
? On the one hand, d
is typed as a Dog, and a Dog barks by saying "woof"
once. On the other hand, in our code, when tellToBark
is called, what is really passed is a NoisyDog instance, and a NoisyDog barks by saying "woof"
twice. What will happen? Let’s find out:
func tellToBark(_ d:Dog) {
    d.bark()
}
var nd = NoisyDog()
tellToBark(nd) // woof woof
The result is "woof woof"
. The internal identity rule says that what matters when a message is sent is not how the recipient of that message is typed through this or that reference, but what that recipient actually is. What arrives inside tellToBark
is a NoisyDog, regardless of the type of variable that holds it; thus, the bark
message causes this object to say "woof"
twice.
Here’s another important consequence of polymorphism — the meaning of the keyword self
. It means the actual instance, and thus its meaning depends upon the type of the actual instance — even if the word self
appears in a superclass’s code. For example:
class Dog {
    func bark() {
        print("woof")
    }
    func speak() {
        self.bark()
    }
}
class NoisyDog : Dog {
    override func bark() {
        super.bark(); super.bark()
    }
}
What happens when we tell a NoisyDog to speak
? Let’s try it:
let nd = NoisyDog()
nd.speak() // woof woof
The speak
method is declared in Dog, the superclass — not in NoisyDog. The speak
method calls the bark
method. It does this by way of the keyword self
. (I could have omitted the explicit reference to self
here, but self
would still be involved implicitly, so I’m not cheating by making self
explicit.) There’s a bark
method in Dog, and an override of the bark
method in NoisyDog. Which bark
method will be called?
The word self
is encountered within the Dog class’s implementation of speak
. But what matters is not where the word self
appears but what it means. It means this instance. And the internal identity principle tells us that this instance is a NoisyDog! Thus, it is NoisyDog’s override of bark
that is called.
Polymorphism applies to Optional types in the same way that it applies to the type of thing wrapped by the Optional. Suppose we have a reference typed as an Optional wrapping a Dog. You already know that you can assign a Dog to it. Well, you can also assign a NoisyDog, or an Optional wrapping a NoisyDog, and the underlying wrapped object will maintain its integrity:
var d : Dog?
d = Dog()
d = NoisyDog()
d = Optional(NoisyDog())
(The applicability of polymorphism to Optionals derives from a special dispensation of the Swift language: Optionals are covariant. I’ll talk more about that later in this chapter.)
Thanks to polymorphism, you can take advantage of subclasses to add power and customization to existing classes. This is important particularly in the world of iOS programming, where most of the classes are defined by Cocoa and don’t belong to you. The UIViewController class, for example, is defined by Cocoa; it has lots of built-in methods that Cocoa will call, and these methods perform various important tasks — but in a generic way. In real life, you’ll make a UIViewController subclass and override those methods to do the tasks appropriate to your particular app. This won’t bother Cocoa in the slightest, because (substitution principle) wherever Cocoa expects to receive or to be talking to a UIViewController, it will accept without question an instance of your UIViewController subclass. And this substitution will also work as expected, because (internal identity principle) whenever Cocoa calls one of those UIViewController methods on your subclass, it is your subclass’s override that will be called.
Polymorphism is cool, but it is also slow. It requires dynamic dispatch, meaning that the compiler can’t perform certain optimizations, and that the runtime has to think about what a message to a class instance means. You can reduce the need for dynamic dispatch by declaring a class or a class member final
or private
, or by turning on Whole Module Optimization. Or use a struct, if appropriate; structs don’t need dynamic dispatch.
The Swift compiler, with its strict typing, imposes severe restrictions on what messages can be sent to an object reference. The messages that the compiler will permit to be sent to an object reference depend upon the reference’s declared type. But the internal identity principle of polymorphism says that, under the hood, an object may have a real type that is different from its reference’s declared type. Such an object may be capable of receiving messages that the compiler won’t permit us to send.
To illustrate, let’s give NoisyDog a method that Dog doesn’t have:
class Dog {
    func bark() {
        print("woof")
    }
}
class NoisyDog : Dog {
    override func bark() {
        super.bark(); super.bark()
    }
    func beQuiet() {
        self.bark()
    }
}
In that code, we configure a NoisyDog so that we can tell it to beQuiet
. Now look at what happens when we try to tell an object typed as a Dog to be quiet:
func tellToHush(_ d:Dog) {
    d.beQuiet() // compile error
}
let nd = NoisyDog()
tellToHush(nd)
Our code doesn’t compile. We can’t send the beQuiet
message to the reference d
inside the function body, because it is typed as a Dog — and a Dog has no beQuiet
method. But there is a certain irony here: for once, we happen to know more than the compiler does — namely, that this object is in fact a NoisyDog and does have a beQuiet
method! Our code would run correctly — because d
really is a NoisyDog — if only we could get our code to compile in the first place. We need a way to say to the compiler, “Look, compiler, just trust me: this thing is going to turn out to be a NoisyDog when the program actually runs, so let me send it this message.”
There is in fact a way to do this — casting. To cast, you use a form of the keyword as
followed by the name of the type you claim something really is. Swift will not let you cast just any old type to any old other type — for example, you can’t cast a String to an Int — but it will let you cast a superclass to a subclass. This is called casting down. When you cast down, the form of the keyword as
that you must use is as!
with an exclamation mark. The exclamation mark reminds you that you are forcing the compiler to do something it would rather not do:
func tellToHush(_ d:Dog) {
    (d as! NoisyDog).beQuiet()
}
let nd = NoisyDog()
tellToHush(nd)
That code compiles, and works. A useful way to rewrite the example is like this:
func tellToHush(_ d:Dog) {
    let d2 = d as! NoisyDog
    d2.beQuiet()
    // ... other NoisyDog messages to d2 can go here ...
}
let nd = NoisyDog()
tellToHush(nd)
The reason that way of rewriting the code is useful is in case we have other NoisyDog messages to send to this object. Instead of casting every time we want to send a message to it, we cast the object once to its internal identity type, and assign it to a variable. Now that variable’s type — inferred, in this case, from the cast — is the internal identity type, and we can send multiple messages to the variable.
A moment ago, I said that the as!
operator’s exclamation mark reminds you that you are forcing the compiler’s hand. It also serves as a warning: your code can now crash! The reason is that you might be lying to the compiler. Casting down is a way of telling the compiler to relax its strict type checking and to let you call the shots. If you use casting to make a false claim, the compiler may permit it, but you will crash when the app runs:
func tellToHush(_ d:Dog) {
    (d as! NoisyDog).beQuiet() // compiles, but prepare to crash...!
}
let d = Dog()
tellToHush(d)
In that code, we told the compiler that this object would turn out to be a NoisyDog, and the compiler obediently took its hands off and allowed us to send the beQuiet
message to it. But in fact, this object was a Dog when our code ran, and so we ultimately crashed when the cast failed because this object was not a NoisyDog.
To prevent yourself from lying accidentally, you can test the type of an instance at runtime. One way to do this is with the keyword is
. You can use is
in a condition; if the condition passes, then cast, in the knowledge that your cast is safe:
func tellToHush(_ d:Dog) {
    if d is NoisyDog {
        let d2 = d as! NoisyDog
        d2.beQuiet()
    }
}
The result is that we won’t cast d
to a NoisyDog unless it really is a NoisyDog.
An alternative way to solve the same problem is to use Swift’s as?
operator. This casts down, but with the option of failure; therefore what it casts to is (you guessed it) an Optional — and now we are on familiar ground, because we know how to deal safely with an Optional:
func tellToHush(_ d:Dog) {
    let noisyMaybe = d as? NoisyDog // an Optional wrapping a NoisyDog
    if noisyMaybe != nil {
        noisyMaybe!.beQuiet()
    }
}
That doesn’t look much cleaner or shorter than our previous approach. But remember that we can safely send a message to an Optional by optionally unwrapping the Optional! Thus we can skip the assignment and condense to a single line:
func tellToHush(_ d:Dog) { (d as? NoisyDog)?.beQuiet() }
First we use the as?
operator to obtain an Optional wrapping a NoisyDog (or nil
). Then we optionally unwrap that Optional and send a message to it. If d
isn’t a NoisyDog, the Optional will be nil
and the message won’t be sent. If d
is a NoisyDog, the Optional will be unwrapped and the message will be sent. Thus, that code is safe.
Recall from Chapter 3 that comparison operators applied to an Optional are automatically applied to the object wrapped by the Optional. The is
, as!
, and as?
operators work the same way.
Let’s start with is
. Consider an Optional d
ostensibly wrapping a Dog (that is, d
is a Dog?
object). It might, in actual fact, be wrapping either a Dog or a NoisyDog. To find out which it is, you might be tempted to use is
. But can you? After all, an Optional is neither a Dog nor a NoisyDog — it’s an Optional! Nevertheless, Swift knows what you mean; when the thing on the left side of is
is an Optional, Swift pretends that it’s the value wrapped in the Optional. Thus, this works just as you would hope:
let d : Dog? = NoisyDog()
if d is NoisyDog { // it is!
}
When using is
with an Optional, the test fails in good order if the Optional is nil
. Thus our is
test really does two things: it checks whether the Optional is nil
, and if it is not, it then checks whether the wrapped value is the type we specify.
What about casting? You can’t really cast an Optional to anything. Nevertheless, Swift knows what you mean; you can use the as!
operator with an Optional. When the thing on the left side of as!
is an Optional, Swift treats it as the wrapped type. Moreover, the consequence of applying the as!
operator is that two things happen: Swift unwraps first, and then casts. This code works, because d
is unwrapped to give us d2
, which is a NoisyDog:
let d : Dog? = NoisyDog()
let d2 = d as! NoisyDog
d2.beQuiet()
That code, however, is not safe. You shouldn’t cast like that, without testing first, unless you are very sure of your ground. If d
were nil
, you’d crash in the second line because you’re trying to unwrap a nil
Optional. And if d
were a Dog, not a NoisyDog, you’d still crash in the second line when the cast fails. That’s why there’s also an as?
operator, which is safe — but yields an Optional:
let d : Dog? = NoisyDog()
let d2 = d as? NoisyDog
d2?.beQuiet()
Another way you’ll use casting is during a value interchange between Swift and Objective-C when two types are equivalent. For example, you can cast a Swift String to a Cocoa NSString, and vice versa. That’s not because one is a subclass of the other, but because they are bridged to one another; in a very real sense, they are the same type. When you cast from String to NSString, you’re not casting down, and what you’re doing is not dangerous, so you use the as
operator, with no exclamation mark. I gave an example, in Chapter 3, of a situation where you might need to do that:
let s = "hello" let range = (s as NSString).range(of:"ell") // (1,3), an NSRange
The cast from String to NSString tells Swift to stay in the Cocoa world as it calls range(of:)
, and thus causes the result to be the Cocoa result, an NSRange, rather than a Swift Range.
In Swift 3, in general, to cross the bridge from a Swift type to a bridged Objective-C type, you will need to cast explicitly (except in the case of a string literal):
let s : NSString = "howdy" // literal string to NSString
let s2 = "howdy"
let s3 : NSString = s2 as NSString // String to NSString
let i : NSNumber = 1 as NSNumber // Int to NSNumber
That sort of code, however, is rather artificial. In real life, you won’t be casting all that often, because the Cocoa API will present itself to you in terms of Swift types. For example, this is legal with no cast:
let name = "MyNib" // Swift String let vc = ViewController(nibName:name, bundle:nil)
That’s legal, not because the Swift String name
magically crosses the bridge to NSString as it is assigned to nibName:
, but because nibName:
is typed as a Swift String (actually, an Optional wrapping a String). The bridge, in effect, is crossed later. Similarly, no cast is required here:
let ud = UserDefaults.standard
let s = "howdy"
ud.set(s, forKey:"Test")
The Swift String s
doesn’t magically cross the bridge when you use it as the first argument to set(_:forKey:)
; rather, the first argument of set(_:forKey:)
is typed as a Swift type, namely Any (actually, an Optional wrapping Any) — and any Swift type can be used, without casting, where an Any is expected. I’ll talk more about Any later in this chapter.
Coming back the other way, it is possible that you’ll receive from Objective-C a value about whose real underlying type Swift has no information. In that case, you’ll probably want to cast explicitly to the underlying type — and now you are casting down, with all that that implies. For example, here’s what happens when we go to retrieve the "howdy"
that we put into UserDefaults in the previous example:
let ud = UserDefaults.standard
let test = ud.object(forKey:"Test") as! String
When we call ud.object(forKey:)
, Swift has no type information; the result is an Any (actually, an Optional wrapping Any). But we know that this particular call should yield a string — because that’s what we put in to begin with. So we can force-cast this value down to a String — and it works. However, if ud.object(forKey:"Test")
were not a string (or if it were nil
), we’d crash. If you’re not sure of your ground, use is
or as?
to be safe. I’ll discuss this kind of downcasting in more detail later on.
It can be useful for an instance to refer to its own type — for example, to send a message to that type. In an earlier example, a Dog instance method fetched a Dog class property by sending a message to the Dog type explicitly — by using the word Dog
:
class Dog {
    class var whatDogsSay : String {
        return "Woof"
    }
    func bark() {
        print(Dog.whatDogsSay)
    }
}
The expression Dog.whatDogsSay
seems clumsy and inflexible. Why should we have to hard-code into Dog a knowledge of what class it is? It has a class; it should just know what it is.
In Swift, you can access the type of an object reference’s underlying object through the type(of:)
function. Thus, if you don’t like the notion of a Dog instance calling a Dog class method by saying Dog
explicitly, there’s another way:
class Dog {
    class var whatDogsSay : String {
        return "Woof"
    }
    func bark() {
        print(type(of:self).whatDogsSay)
    }
}
An important thing about using type(of:)
instead of hard-coding a class name is that it obeys polymorphism:
class Dog {
    class var whatDogsSay : String {
        return "Woof"
    }
    func bark() {
        print(type(of:self).whatDogsSay)
    }
}
class NoisyDog : Dog {
    override class var whatDogsSay : String {
        return "Woof woof woof"
    }
}
Now watch what happens:
let nd = NoisyDog()
nd.bark() // Woof woof woof
If we tell a NoisyDog instance to bark
, it says "Woof woof woof"
. The reason is that type(of:)
means, “The type that this object actually is, right now.” We send the bark
message to a NoisyDog instance. The bark
implementation refers to type(of:self)
; self
means this instance, which is a NoisyDog, and so type(of:self)
is the NoisyDog class, and it is NoisyDog’s version of whatDogsSay
that is fetched.
You can also use type(of:)
for learning the name of an object’s type, as a string — typically for debugging purposes. When you say print(type(of:myObject))
, you’ll see the type name in the console.
In some situations, you may want to pass an object type as a value. That is legal; an object type is itself an object. Here’s what you need to know:
The type of an object type is a metatype, whose name is the type name followed by .Type; for example, the type of the Dog type is Dog.Type. To obtain the type itself as a value, append .self to the type name using dot-notation, or hand an object to type(of:).
For example, here’s a function dogTypeExpecter
that accepts a Dog type as its parameter:
func dogTypeExpecter(_ whattype:Dog.Type) { }
And here’s an example of calling that function:
dogTypeExpecter(Dog.self)
Or you could call it like this:
let d = Dog()
dogTypeExpecter(type(of:d))
The substitution principle applies, so you could call dogTypeExpecter
starting with a NoisyDog instead:
dogTypeExpecter(NoisyDog.self)
let nd = NoisyDog()
dogTypeExpecter(type(of:nd))
Why might you want to do something like that? A typical situation is that your function is a factory for instances: given a type, it creates an instance of that type, possibly prepares it in some way, and returns it. You can use a variable reference to a type — what Swift calls a metatype — to make an instance of that type, by explicitly sending it an init(...)
message.
For example, here’s a Dog class with an init(name:)
initializer, and its NoisyDog subclass:
class Dog {
    var name : String
    init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
}
And here’s a factory method that creates a Dog or a NoisyDog, as specified by its parameter, gives it a name, and returns it:
func dogMakerAndNamer(_ whattype:Dog.Type) -> Dog {
    let d = whattype.init(name:"Fido") // compile error
    return d
}
However, there’s a problem. The code doesn’t compile. The reason is that the compiler is in doubt as to whether the init(name:)
initializer is implemented by every possible subtype of Dog. To reassure it, we must declare that initializer with the required
keyword:
class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
}
class NoisyDog : Dog {
}
I promised I’d tell you why you might need to declare an initializer as required
; now I’m fulfilling that promise! The required
designation reassures the compiler; every subclass of Dog must inherit or reimplement init(name:)
, so it’s legal to send the init(name:)
message to a type reference that might refer to Dog or some subclass of Dog. Now our code compiles, and we can call our function:
let d = dogMakerAndNamer(Dog.self) // d is a Dog named Fido
let d2 = dogMakerAndNamer(NoisyDog.self) // d2 is a NoisyDog named Fido
In a class method, self
stands for the class — polymorphically. This means that, in a class method, you can send a message to self
to call an initializer polymorphically. Here’s an example. Let’s say we want to move our instance factory method into Dog itself, as a class method. Let’s call this class method makeAndName
. We want this class method to create and return a named Dog of whatever class we send the makeAndName
message to. If we say Dog.makeAndName()
, we should get a Dog. If we say NoisyDog.makeAndName()
, we should get a NoisyDog. So our makeAndName
class method initializes polymorphic self
:
class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
    class func makeAndName() -> Dog {
        let d = self.init(name:"Fido")
        return d
    }
}
class NoisyDog : Dog {
}
It works as expected:
let d = Dog.makeAndName() // d is a Dog named Fido
let d2 = NoisyDog.makeAndName() // d2 is a NoisyDog named Fido
But there’s a problem. Although d2
is in fact a NoisyDog, it is typed as a Dog. This is because our makeAndName
class method is declared as returning a Dog. That isn’t what we meant to declare. What we want to declare is that this method returns an instance of the same type as the class to which the makeAndName
message was originally sent. In other words, we need a polymorphic type declaration! That type is Self
(notice the capitalization). It is used as a return type in a method declaration to mean “an instance of whatever type this is at runtime.” Thus:
class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
    class func makeAndName() -> Self {
        let d = self.init(name:"Fido")
        return d
    }
}
class NoisyDog : Dog {
}
Now when we call NoisyDog.makeAndName()
we get a NoisyDog typed as a NoisyDog.
Self
also works for instance method declarations. Therefore, we can write an instance method version of our factory method. Here, we start with a Dog or a NoisyDog and tell it to have a puppy of the same type as itself:
class Dog {
    var name : String
    required init(name:String) {
        self.name = name
    }
    func havePuppy(name:String) -> Self {
        return type(of:self).init(name:name)
    }
}
class NoisyDog : Dog {
}
And here’s some code to test it:
let d = Dog(name:"Fido") let d2 = d.havePuppy(name:"Fido Junior") let nd = NoisyDog(name:"Rover") let nd2 = nd.havePuppy(name:"Rover Junior")
As expected, d2
is a Dog, but nd2
is a NoisyDog typed as a NoisyDog.
All this terminology can get a bit confusing, so here’s a quick summary:
type(of:): applied to an object, yields the object's actual type, polymorphically; use it to send a message to that type or to learn the type's name.

.Type: appended to the name of a type in a type declaration, signifies the metatype. Dog means a Dog instance is expected (or an instance of one of its subclasses), but Dog.Type means that the Dog type itself is expected (or the type of one of its subclasses).

.self: appended to the name of a type, yields the type itself as a value; where a Dog.Type is expected, you can pass Dog.self.

self: in instance code, the instance; in static/class code, the type. In a class method, self.init(...) instantiates the type, polymorphically.

Self: in a method declaration, used as a return type to mean the actual type of the instance or class at runtime, polymorphically.
A protocol is a way of expressing commonalities between otherwise unrelated types. For example, a Bee object and a Bird object might need to have certain features in common by virtue of the fact that both a bee and a bird can fly. Thus, it might be useful to define a Flier type. The question is: In what sense can both Bee and Bird be Fliers?
One possibility, of course, is class inheritance. If Bee and Bird are both classes, there’s a class hierarchy of superclasses and subclasses. So Flier could be the superclass of both Bee and Bird. The problem is that there may be other reasons why Flier can’t be the superclass of both Bee and Bird. A Bee is an Insect; a Bird isn’t. Yet they both have the power of flight — independently. We need a type that cuts across the class hierarchy somehow, tying remote classes together.
Moreover, what if Bee and Bird are not both classes? In Swift, that’s a very real possibility. Important and powerful objects can be structs instead of classes. But there is no struct hierarchy of superstructs and substructs! That, after all, is one of the major differences between structs and classes. Yet structs need the ability to possess and express formal commonalities every bit as much as classes do. How can a Bee struct and a Bird struct both be Fliers?
Swift solves this problem through the use of protocols. Protocols are tremendously important in Swift; the Swift header defines over 70 of them! Moreover, Objective-C has protocols as well; Swift protocols correspond roughly to these, and can interchange with them. Cocoa makes heavy use of protocols.
A protocol is an object type, but there are no protocol objects — you can’t instantiate a protocol. A protocol is much more lightweight than that. A protocol declaration is just a list of properties and methods. The properties have no values, and the methods have no code! The idea is that a “real” object type can formally declare that it belongs to a protocol type; this is called adopting or conforming to the protocol. An object type that adopts a protocol is signing a contract stating that it actually implements the properties and methods listed by the protocol.
For example, let’s say that being a Flier consists of no more than implementing a fly
method. Then a Flier protocol could specify that there must be a fly
method; to do so, it lists the fly
method with no function body, like this:
protocol Flier { func fly() }
Any type — an enum, a struct, a class, or even another protocol — can then adopt this protocol. To do so, it lists the protocol after a colon after its name in its declaration. (If the adopter is a class with a superclass, the protocol comes after a comma after the superclass specification.)
Let’s say Bird is a struct. Then it can adopt Flier like this:
struct Bird : Flier { } // compile error
So far, so good. But that code won’t compile. The Bird struct has made a promise to implement the features listed in the Flier protocol. Now it must keep that promise! The fly
method is the only requirement of the Flier protocol. To satisfy that requirement, I’ll just give Bird an empty fly
method:
protocol Flier {
    func fly()
}
struct Bird : Flier {
    func fly() {
    }
}
That’s all there is to it! We’ve defined a protocol, and we’ve made a struct adopt that protocol. Of course, in real life you’ll probably want to make the adopter’s implementation of the protocol’s methods do something; but the protocol says nothing about that.
A protocol can also declare a method and provide its implementation, thanks to protocol extensions, which I’ll discuss later in this chapter.
Perhaps at this point you’re scratching your head over why this is a useful thing to do. We made a Bird a Flier, but so what? If we wanted a Bird to know how to fly, why didn’t we just give Bird a fly
method without adopting any protocol? The answer has to do with types. Don’t forget, a protocol is a type. Our protocol, Flier, is a type. Therefore, I can use Flier wherever I would use a type — to declare the type of a variable, for example, or the type of a function parameter:
func tellToFly(_ f:Flier) { f.fly() }
Think about that code for a moment, because it embodies the entire point of protocols. A protocol is a type — so polymorphism applies. Protocols give us another way of expressing the notion of type and subtype. This means that, by the substitution principle, a Flier here could be an instance of any object type — an enum, a struct, or a class. It doesn’t matter what object type it is, as long as it adopts the Flier protocol. If it adopts the Flier protocol, it can be passed where a Flier is expected. Moreover, if it adopts the Flier protocol, then it must have a fly
method, because that’s exactly what it means to adopt the Flier protocol! Therefore the compiler is willing to let us send the fly
message to this object.
The converse, however, is not true: an object with a fly
method is not automatically a Flier. It isn’t enough to obey the requirements of a protocol; the object type must adopt the protocol. This code won’t compile:
struct Bee {
    func fly() {
    }
}
let b = Bee()
tellToFly(b) // compile error
A Bee can be sent the fly
message, qua Bee. But tellToFly
doesn’t take a Bee parameter; it takes a Flier parameter. Formally, a Bee is not a Flier. To make a Bee a Flier, simply declare formally that Bee adopts the Flier protocol. This code does compile:
struct Bee : Flier {
    func fly() {
    }
}
let b = Bee()
tellToFly(b)
Enough of birds and bees; we’re ready for a real-life example! As I’ve already said, Swift is chock full of protocols already. Let’s make one of our own object types adopt one. One of the most useful Swift protocols is CustomStringConvertible. The CustomStringConvertible protocol requires that we implement a description
String property. If we do that, a wonderful thing happens: when an instance of this type is used in string interpolation or print
(or the po
command in the console), the description
property value is used automatically to represent it.
Recall, for example, the Filter enum, from earlier in this chapter. I’ll add a description
property to it:
enum Filter : String {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    var description : String {
        return self.rawValue
    }
}
But that isn’t enough, in and of itself, to give Filter the power of the CustomStringConvertible protocol; to do that, we also need to adopt the CustomStringConvertible protocol formally. There is already a colon and a type in the Filter declaration, so an adopted protocol comes after a comma:
enum Filter : String, CustomStringConvertible {
    case albums = "Albums"
    case playlists = "Playlists"
    case podcasts = "Podcasts"
    case books = "Audiobooks"
    var description : String {
        return self.rawValue
    }
}
We have now made Filter formally adopt the CustomStringConvertible protocol. The CustomStringConvertible protocol requires that we implement a description
String property; we do implement a description
String property, so our code compiles. Now we can hand a Filter to print
, or interpolate it into a string, and its description
will appear automatically:
let type = Filter.albums
print(type) // Albums
print("It is \(type)") // It is Albums
Behold the power of protocols. You can give any object type the power of string conversion in exactly the same way.
Note that a type can adopt more than one protocol! For example, the built-in Double type adopts CustomStringConvertible, Hashable, Comparable, and other built-in protocols. To declare adoption of multiple protocols, list each one after the first protocol in the declaration, separated by comma. For example:
struct MyType : CustomStringConvertible, Hashable, Comparable { // ... }
(Of course, that code won’t compile unless I also declare the required methods in MyType, so that MyType really does adopt those protocols.)
A protocol is a type, and an adopter of a protocol is its subtype. Polymorphism applies. Therefore, the operators for mediating between an object’s declared type and its real type work when the object is declared as a protocol type. For example, given a protocol Flier that is adopted by both Bird and Bee, we can use the is
operator to test whether a particular Flier is in fact a Bird:
func isBird(_ f:Flier) -> Bool { return f is Bird }
Similarly, as!
and as?
can be used to cast an object declared as a protocol type down to its actual type. This is important to be able to do, because the adopting object will typically be able to receive messages that the protocol can’t receive. For example, let’s say that a Bird can get a worm:
struct Bird : Flier { func fly() { } func getWorm() { } }
A Bird can fly
qua Flier, but it can getWorm
only qua Bird. Thus, you can’t tell just any old Flier to get a worm:
func tellGetWorm(_ f:Flier) {
    f.getWorm() // compile error
}
But if this Flier is a Bird, clearly it can get a worm. That is exactly what casting is all about:
func tellGetWorm(_ f:Flier) {
    (f as? Bird)?.getWorm()
}
Protocol declaration can take place only at the top level of a file. To declare a protocol, use the keyword protocol
followed by the name of the protocol, which, being an object type, should start with a capital letter. Then come curly braces which may contain the following:
A property declaration in a protocol consists of var
(not let
), the property name, a colon, its type, and curly braces containing the word get
or the words get set
. In the former case, the adopter’s implementation of this property can be writable, while in the latter case, it must be: the adopter may not implement a get set
property as a read-only computed property or as a constant (let
) stored property.
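Here's a minimal sketch of what such property requirements might look like; the Flier protocol and its property names here are hypothetical, invented purely for illustration:
protocol Flier {
    var airspeed : Double {get set}  // adopter's implementation must be settable
    var canFly : Bool {get}          // adopter's implementation may be read-only
}
struct Bird : Flier {
    var airspeed : Double = 10  // a stored variable property satisfies {get set}
    let canFly = true           // a constant stored property satisfies {get}
}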
To declare a static/class property, precede it with the keyword static
. A class adopter is free to implement this as a class
property.
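Again, a hypothetical sketch, assuming a made-up Flier protocol with a static property requirement:
protocol Flier {
    static var maximumAltitude : Double {get}
}
struct Bird : Flier {
    static var maximumAltitude : Double { return 500 }   // static satisfies the requirement
}
class Jet : Flier {
    class var maximumAltitude : Double { return 40000 }  // a class adopter may say class instead
}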
A method declaration in a protocol is a function declaration without a function body — that is, it has no curly braces and thus it has no code. Any object function type is legal, including init
and subscript
. (The syntax for declaring a subscript in a protocol is the same as the syntax for declaring a subscript in an object type, except that there will be no function bodies, so the curly braces, like those of a property declaration in a protocol, will contain get
or get set
.)
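As an illustration, here's a hypothetical protocol declaring one requirement of each kind, along with a struct that satisfies it:
protocol Flier {
    init(altitude:Double)                 // initializer requirement
    func fly()                            // method requirement: no body
    subscript(ix:Int) -> String {get}     // subscript requirement
}
struct Bird : Flier {
    var altitude : Double
    init(altitude:Double) { self.altitude = altitude }
    func fly() {}
    subscript(ix:Int) -> String { return "feather \(ix)" }
}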
To declare a static/class method, precede it with the keyword static
. A class adopter is free to implement this as a class
method.
If a method, as implemented by an enum or struct, might need to be declared mutating
, the protocol must specify the mutating
designation; the adopter cannot add mutating
if the protocol lacks it. However, the adopter may omit mutating
if the protocol has it.
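Here's a sketch of what I mean, using a made-up Switchable protocol:
protocol Switchable {
    mutating func toggle()   // the protocol must say mutating for the sake of struct and enum adopters
}
struct Lamp : Switchable {
    var on = false
    mutating func toggle() { self.on = !self.on }
}
class Fan : Switchable {
    var on = false
    func toggle() { self.on = !self.on }   // a class adopter simply omits mutating
}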
A protocol can itself adopt one or more protocols; the syntax is just as you would expect — a colon after the protocol’s name in the declaration, followed by a comma-separated list of the protocols it adopts. In effect, this gives you a way to create an entire secondary hierarchy of types! The Swift headers make heavy use of this.
A protocol that adopts another protocol may repeat the contents of the adopted protocol’s curly braces, for clarity; but it doesn’t have to, as this repetition is implicit. An object type that adopts such a protocol must satisfy the requirements of this protocol and all protocols that the protocol adopts.
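For instance (the protocol names here are my own, for illustration only):
protocol Flier {
    func fly()
}
protocol Soarer : Flier {   // Soarer itself adopts Flier
    func soar()
}
struct Eagle : Soarer {     // so an adopter of Soarer must satisfy both protocols
    func fly() {}
    func soar() {}
}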
If the only purpose of a protocol would be to combine other protocols by adopting all of them, without adding any new requirements, and if this combination is used in just one place in your code, you can avoid formally declaring the protocol in the first place by creating the combining protocol on the fly. To do so, join the protocol names with &
. This is called protocol composition.
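For example, assuming hypothetical Flier and Walker protocols, a function parameter could be declared as a composition of the two:
protocol Flier { func fly() }
protocol Walker { func walk() }
func flyThenWalk(_ creature: Flier & Walker) {   // protocol composition, created on the fly
    creature.fly()
    creature.walk()
}
struct Duck : Flier, Walker {
    func fly() {}
    func walk() {}
}
// flyThenWalk(Duck()) is legal; a type adopting only one of the two protocols would be rejected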
In Objective-C, a protocol member can be declared optional, meaning that this member doesn’t have to be implemented by the adopter, but it may be. For compatibility with Objective-C, Swift allows optional protocol members, but only in a protocol explicitly bridged to Objective-C by preceding its declaration with the @objc
attribute. In such a protocol, an optional member is declared by preceding its declaration with the keywords @objc optional
:
@objc protocol Flier { @objc optional var song : String {get} @objc optional func sing() }
Only a class can adopt such a protocol, and this feature will work only if the class is an NSObject subclass, or if the optional member is marked with the @objc
attribute:
class Bird : Flier { @objc func sing() { print("tweet") } }
(All these @objc
markings are needed because optional protocol members are not really a Swift feature; they are an Objective-C feature! Therefore, everything about an optional protocol member must be explicitly exposed to Objective-C, so that Objective-C can implement it.)
An optional member is not guaranteed to be implemented by the adopter, so Swift doesn’t know whether it’s safe to send a Flier either the song
message or the sing
message.
In the case of an optional property like song
, Swift solves the problem by wrapping its value in an Optional. If the Flier adopter doesn’t implement the property, the result is nil
and no harm done:
let f : Flier = Bird() let s = f.song // s is an Optional wrapping a String
This is one of those rare situations where you can wind up with a double-wrapped Optional. For example, if the value of the optional property song
were itself a String?
, then fetching its value from a Flier would yield a String??
:
@objc protocol Flier { @objc optional var song : String? {get} @objc optional func sing() } let f : Flier = Bird() let s = f.song // s is an Optional wrapping an Optional wrapping a String
An optional property can be declared {get set}
by its protocol, but there is no legal syntax for setting such a property in an object of that protocol type. For example, if f
is a Flier and song
is declared {get set}
, you can’t set f.song
. I regard this as a bug in the language.
In the case of an optional method like sing
, things are more elaborate. If the method is not implemented, we must not be permitted to call it in the first place. To handle this situation, the method itself is automatically typed as an Optional version of its declared type. To send the sing
message to a Flier, therefore, you must unwrap it. The safe approach is to unwrap it optionally, with a question mark:
let f : Flier = Bird() f.sing?()
That code compiles — and it also runs safely. The effect is to send the sing
message to f
only if this Flier adopter implements sing
. If this Flier adopter doesn’t implement sing
, nothing happens. You could have force-unwrapped the call — f.sing!()
— but then your app would crash if the adopter doesn’t implement sing
.
If an optional method returns a value, that value is wrapped in an Optional as well. For example:
@objc protocol Flier { @objc optional var song : String {get} @objc optional func sing() -> String }
If we now call sing?()
on a Flier, the result is an Optional wrapping a String:
let f : Flier = Bird() let s = f.sing?() // s is an Optional wrapping a String
If we force-unwrap the call — sing!()
— the result is either a String (if the adopter implements sing
) or a crash (if it doesn’t).
Many Cocoa protocols have optional members. For example, your iOS app will have an app delegate class that adopts the UIApplicationDelegate protocol; this protocol has many methods, all of them optional. That fact, however, will have no effect on how you implement those methods; you don’t need to mark them in any special way. Your app delegate class is already a subclass of NSObject, so this feature just works. Either you implement a method or you don’t.
A protocol declared with the keyword class after the colon following its name is a class protocol, meaning that it can be adopted only by class object types:
protocol SecondViewControllerDelegate : class { func accept(data:Any!) }
(There is no need to say class
if this protocol is already marked @objc
; the @objc
attribute implies that this is also a class protocol.)
A typical reason for declaring a class protocol is to take advantage of special memory management features that apply only to classes. I haven’t discussed memory management yet, but I’ll continue the example anyway (and I’ll repeat it when I do talk about memory management, in Chapter 5):
class SecondViewController : UIViewController {
    weak var delegate : SecondViewControllerDelegate?
    // ...
}
The keyword weak
marks the delegate
property as having special memory management. Only a class instance can participate in this kind of special memory management. The delegate
property is typed as a protocol, and a protocol might be adopted by a struct or an enum type. So to satisfy the compiler that this object will in fact be a class instance, and not a struct or enum instance, the protocol is declared as a class protocol.
Suppose that a protocol declares an initializer. And suppose that a class adopts this protocol. By the terms of this protocol, this class and any subclass it may ever have must implement this initializer. Therefore, the class must not only implement the initializer, but it must also mark it as required
. An initializer declared in a protocol is thus implicitly required, and the class is forced to make that requirement explicit.
Consider this simple example, which won’t compile:
protocol Flier {
    init()
}
class Bird : Flier {
    init() {} // compile error
}
That code generates an elaborate but perfectly informative compile error message: “Initializer requirement init()
can only be satisfied by a required
initializer in non-final class Bird.” To compile our code, we must designate our initializer as required
:
protocol Flier { init() } class Bird : Flier { required init() {} }
The alternative, as the compile error message informs us, would be to mark the Bird class as final
. This would mean that it cannot have any subclasses — thus guaranteeing that the problem will never arise in the first place. If Bird were marked final
, there would be no need to mark its init
as required
.
In the above code, Bird is not marked as final
, and its init
is marked as required
. This, as I’ve already explained, means in turn that any subclass of Bird that implements any designated initializers — and thus loses initializer inheritance — must implement the required initializer and mark it required
as well.
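Here's a hypothetical sketch of that situation; the Hawk subclass and its name property are invented for illustration:
protocol Flier { init() }
class Bird : Flier {
    required init() {}
}
class Hawk : Bird {
    var name : String
    init(name:String) {        // a designated initializer cuts off initializer inheritance...
        self.name = name
        super.init()
    }
    required init() {          // ...so Hawk must implement init() again and mark it required
        self.name = "Hawk"
        super.init()
    }
}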
That fact is responsible for a strange and annoying feature of real-life iOS programming with Swift. Let’s say you subclass the built-in Cocoa class UIViewController — something that you are extremely likely to do. And let’s say you give your subclass an initializer — something that you are also extremely likely to do:
class ViewController: UIViewController { init() { super.init(nibName: "ViewController", bundle: nil) } }
That code won’t compile. The compile error says: “required
initializer init(coder:)
must be provided by subclass of UIViewController.”
What’s going on here? It turns out that UIViewController adopts a protocol, NSCoding. And this protocol requires an initializer init(coder:)
. None of that is your doing; UIViewController and NSCoding are declared by Cocoa, not by you. But that doesn’t matter! This is the same situation I was just describing. Your UIViewController subclass must either inherit init(coder:)
or must explicitly implement it and mark it required
. Well, your subclass has implemented a designated initializer of its own — thus cutting off initializer inheritance. Therefore it must implement init(coder:)
and mark it required
.
But that makes no sense if you are not expecting init(coder:)
ever to be called on your UIViewController subclass. You are being forced to write an initializer for which you can provide no meaningful functionality! Fortunately, Xcode’s Fix-it feature will offer to write the initializer for you, like this:
required init?(coder aDecoder: NSCoder) { fatalError("init(coder:) has not been implemented") }
That code satisfies the compiler. (I’ll explain in Chapter 5 why it’s a legal initializer even though it doesn’t fulfill an initializer’s contract.) It also deliberately crashes if it is ever called.
If you do have functionality for this initializer, you will delete the fatalError
line and insert your own functionality in its place. A minimum meaningful implementation would be super.init(coder:aDecoder)
, but of course if your class has properties that need initialization, you will need to initialize them first.
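For example, here's roughly what such an implementation might look like if the view controller had a stored property of its own (the greeting property is hypothetical, purely for illustration):
import UIKit
class ViewController : UIViewController {
    let greeting : String
    init() {
        self.greeting = "howdy"
        super.init(nibName: "ViewController", bundle: nil)
    }
    required init?(coder aDecoder: NSCoder) {
        self.greeting = "howdy"       // initialize our own properties first...
        super.init(coder: aDecoder)   // ...then delegate up to super's init(coder:)
    }
}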
Not only UIViewController but lots of built-in Cocoa classes adopt NSCoding. You will encounter this problem if you subclass any of those classes and implement your own initializer. It’s just something you’ll have to get used to.
One of the wonderful things about Swift is that so many of its features, rather than being built-in and accomplished by magic, are implemented in Swift and are exposed to view in the Swift header. Literals are a case in point. The reason you can say 5
to make an Int whose value is 5, instead of formally initializing Int by saying Int(5)
, is not because of magic (or at least, not entirely because of magic). It’s because Int adopts a protocol, ExpressibleByIntegerLiteral. Not only Int literals, but all literals work this way. The following protocols are declared in the Swift header:
Your own object type can adopt a literal convertible protocol as well. This means that a literal can appear where an instance of your object type is expected! For example, here we declare a Nest type that contains some number of eggs (its eggCount
):
struct Nest : ExpressibleByIntegerLiteral { var eggCount : Int = 0 init() {} init(integerLiteral val: Int) { self.eggCount = val } }
Because Nest adopts ExpressibleByIntegerLiteral, we can pass an Int where a Nest is expected, and our init(integerLiteral:)
will be called automatically, causing a new Nest object with the specified eggCount
to come into existence at that moment:
func reportEggs(_ nest:Nest) { print("this nest contains \(nest.eggCount) eggs") } reportEggs(4) // this nest contains 4 eggs
A generic is a sort of placeholder for a type, into which an actual type will be slotted later. This is useful because of Swift’s strict typing. Without sacrificing that strict typing, there are situations where you can’t or don’t want to specify too precisely in a certain region of your code what the exact type of something is going to be.
An Optional is a good example. Any type of value can be wrapped up in an Optional. Yet there is no doubt as to what type is wrapped up in a particular Optional. How can this be? It’s because Optional is a generic type. Here’s how an Optional works.
I have already said that an Optional is an enum, with two cases: .none
and .some
. If an Optional’s case is .some
, it has an associated value — the value that is wrapped by this Optional. But what is the type of that associated value? On the one hand, one wants to say that it can be any type; that, after all, is why anything can be wrapped up in an Optional. On the other hand, any given Optional that wraps a value wraps a value of some specific type. When you unwrap an Optional, that unwrapped value needs to be typed as what it is, so that it can be sent messages appropriate for that type.
The solution to this sort of problem is a Swift generic. The declaration for the Optional enum in the Swift header starts like this:
enum Optional<Wrapped> {
    // ...
}
That syntax means: “In the course of this declaration, I’m going to be using a made-up type — a type placeholder — that I call Wrapped. It’s a real and individual type, but I’m not going to say more about it right now. All you need to know is that whenever I say Wrapped, I mean this one particular type. When an actual Optional is created, it will be perfectly clear what type Wrapped stands for, and then, wherever I say Wrapped, you should substitute the type that it stands for.”
Let’s look at more of the Optional declaration:
enum Optional<Wrapped> {
    case none
    case some(Wrapped)
    init(_ some: Wrapped)
    // ...
}
Having declared that Wrapped is a placeholder, we proceed to use it. There’s a case .none
. There’s also a case .some
, which has an associated value — of type Wrapped. We also have an initializer, which takes a parameter — of type Wrapped. Thus, the type with which we are initialized — whatever type that may be — is type Wrapped, and thus is the type of value that is associated with the .some
case.
Now, in the declaration of the Optional enum, Wrapped is a placeholder. But in real life, when an actual Optional is created, it will be initialized with an actual value of some definite type. Usually, we’ll use the question-mark syntactic sugar (type String?
) and the initializer will be called for us behind the scenes, but let’s call the initializer explicitly for the sake of clarity:
let s = Optional("howdy")
Obviously, "howdy"
here is a String. But we’re calling init(_ some: Wrapped)
, so "howdy"
is being supplied here as a Wrapped instance. As a result, the compiler knows that Wrapped is String throughout this particular Optional<Wrapped>
. This is called resolving (or specializing) the generic. Under the hood, wherever Wrapped appears in the declaration of the Optional enum, the compiler now substitutes String. Thus, the declaration for the particular Optional referred to by the variable s
looks, in the compiler’s mind, like this:
enum Optional<String> {
    case none
    case some(String)
    init(_ some: String)
    // ...
}
That is the pseudocode declaration of an Optional whose Wrapped placeholder has been replaced everywhere with the String type. We can summarize this by saying that s
is an Optional<String>
. In fact, that is legal syntax! We can create the same Optional like this:
let s : Optional<String> = "howdy"
As that example demonstrates, generics do not in any way relax Swift’s strict typing. In particular, they do not postpone resolution of a type until runtime. When you use a generic, your code will still specify its real type; that real type is known with complete specificity at compile time! The particular region of your code where the type is expected uses a generic so that it doesn’t have to specify the type fully, but at the point where that code is used by other code, the type is specified. The placeholder is generic, but it is resolved to an actual specific type whenever the generic is used.
Here’s a list of the places where generics, in one form or another, can be declared in Swift:
Self
In a protocol, use of the keyword Self
(note the capitalization) turns the protocol into a generic. Self
is a placeholder meaning the type of the adopter. For example, here’s a Flier protocol that declares a method that takes a Self
parameter:
protocol Flier { func flockTogetherWith(_ f:Self) }
That means that if the Bird object type were to adopt the Flier protocol, its implementation of flockTogetherWith
would need to declare its parameter as a Bird.
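In other words, this adoption would satisfy the protocol:
protocol Flier {
    func flockTogetherWith(_ f:Self)
}
struct Bird : Flier {
    func flockTogetherWith(_ f:Bird) {}   // Self is resolved to the adopter, Bird
}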
A protocol can declare an associated type using an associatedtype
statement. This turns the protocol into a generic; the associated type name is a placeholder. For example:
protocol Flier { associatedtype Other func flockTogetherWith(_ f:Other) func mateWith(_ f:Other) }
An adopter will declare some particular type where the generic uses the associated type name, thus resolving the placeholder. If the Bird struct adopts the Flier protocol and declares the parameter of flockTogetherWith
as a Bird, that declaration resolves Other to Bird for this particular adopter — and now Bird must declare the parameter for mateWith
as a Bird as well:
struct Bird : Flier { func flockTogetherWith(_ f:Bird) {} func mateWith(_ f:Bird) {} }
A function declaration can use a generic placeholder type for any of its parameters, for its return type, and within its body. Declare the placeholder name in angle brackets after the function name:
func takeAndReturnSameThing<T> (_ t:T) -> T { return t }
The caller will use some particular type where the placeholder appears in the function declaration, thus resolving the placeholder:
let thing = takeAndReturnSameThing("howdy")
Here, the type of the argument "howdy"
used in the call resolves T to String; therefore this call to takeAndReturnSameThing
will also return a String, and the variable capturing the result, thing
, is inferred to String as well.
An object type declaration can use a generic placeholder type anywhere within its curly braces. Declare the placeholder name in angle brackets after the object type name:
struct HolderOfTwoSameThings<T> { var firstThing : T var secondThing : T init(thingOne:T, thingTwo:T) { self.firstThing = thingOne self.secondThing = thingTwo } }
A user of this object type will use some particular type where the placeholder appears in the object type declaration, thus resolving the placeholder:
let holder = HolderOfTwoSameThings(thingOne:"howdy", thingTwo:"getLost")
Here, the type of the thingOne
argument, "howdy"
, used in the initializer call, resolves T to String; therefore thingTwo
must also be a String, and the properties firstThing
and secondThing
are Strings as well.
For generic functions and object types, which use the angle bracket syntax, the angle brackets may contain multiple placeholder names, separated by comma. For example:
func flockTwoTogether<T, U>(_ f1:T, _ f2:U) {}
The two parameters of flockTwoTogether
can now be resolved to two different types (though they do not have to be different).
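For example, given that declaration, both of these calls would be legal:
flockTwoTogether("howdy", 1)          // T resolves to String, U to Int
flockTwoTogether("howdy", "getLost")  // T and U may also resolve to the same type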
All our examples so far have permitted any type to be substituted for the placeholder. Alternatively, you can limit the types that are eligible to be used for resolving a particular placeholder. This is called a type constraint. The simplest form of type constraint is to put a colon and a type name after the placeholder’s name when it first appears. The type name after the colon can be a class name or a protocol name.
For example, let’s return to our Flier and its flockTogetherWith
function. Suppose we want to say that the parameter of flockTogetherWith
should be declared by the adopter as a type that adopts Flier. You would not do that by declaring the type of that parameter as Flier in the protocol:
protocol Flier { func flockTogetherWith(_ f:Flier) }
That code says: You can’t adopt this protocol unless you declare a function flockTogetherWith
whose parameter is declared as Flier:
struct Bird : Flier { func flockTogetherWith(_ f:Flier) {} }
That isn’t what we want to say! We want to say that Bird should be able to adopt Flier while declaring its parameter as being of some Flier adopter type, such as Bird. The way to say that is to use a placeholder constrained as a Flier. For example, we could do it like this:
protocol Flier { associatedtype Other : Flier func flockTogetherWith(_ f:Other) }
Unfortunately, that’s illegal: a protocol can’t use itself as a type constraint. The workaround is to declare an extra protocol that Flier itself will adopt, and constrain Other to that protocol:
protocol Superflier {} protocol Flier : Superflier { associatedtype Other : Superflier func flockTogetherWith(_ f:Other) }
Now Bird can be a legal adopter like this:
struct Bird : Flier { func flockTogetherWith(_ f:Bird) {} }
In a generic function or a generic object type, the type constraint appears in the angle brackets. For example:
func flockTwoTogether<T:Flier>(_ f1:T, _ f2:T) {}
If Bird and Insect both adopt Flier, this flockTwoTogether
function can be called with two Bird arguments or with two Insect arguments — but not with a Bird and an Insect, because T is just one placeholder, signifying one Flier adopter type. And you can’t call flockTwoTogether
with two String parameters, because a String is not a Flier.
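To make that concrete, here's a sketch of the legal and illegal calls (with Bird and Insect adopting Flier, as described):
protocol Flier { func fly() }
struct Bird : Flier { func fly() {} }
struct Insect : Flier { func fly() {} }
func flockTwoTogether<T:Flier>(_ f1:T, _ f2:T) {}
flockTwoTogether(Bird(), Bird())         // legal
flockTwoTogether(Insect(), Insect())     // legal
// flockTwoTogether(Bird(), Insect())    // compile error: T can stand for only one type
// flockTwoTogether("howdy", "getLost")  // compile error: String is not a Flier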
A type constraint on a placeholder is often useful as a way of assuring the compiler that some message can be sent to an instance of the placeholder type. For example, let’s say we want to implement a function myMin
that returns the smallest from a list of the same type. Here’s a promising implementation as a generic function, but there’s one problem — it doesn’t compile:
func myMin<T>(_ things:T...) -> T {
    var minimum = things[0]
    for ix in 1..<things.count {
        if things[ix] < minimum { // compile error
            minimum = things[ix]
        }
    }
    return minimum
}
The problem is the comparison things[ix] < minimum
. How does the compiler know that the type T, the type of things[ix]
and minimum
, will be resolved to a type that can in fact be compared using the less-than operator in this way? It doesn’t, and that’s exactly why it rejects that code. The solution is to promise the compiler that the resolved type of T will in fact work with the less-than operator. The way to do that, it turns out, is to constrain T to Swift’s built-in Comparable protocol; adoption of the Comparable protocol exactly guarantees that the adopter does work with the less-than operator:
func myMin<T:Comparable>(_ things:T...) -> T {
Now myMin
compiles, because it cannot be called except by resolving T to an object type that adopts Comparable and hence can be compared with the less-than operator. Naturally, built-in object types that you think should be comparable, such as Int, Double, String, and Character, do in fact adopt the Comparable protocol! If you look in the Swift headers, you’ll find that the built-in min
global function is declared in just this way, and for just this reason.
A generic protocol (a protocol whose declaration mentions Self
or has an associated type) can be used as a type only in a generic, as a type constraint. If you try to use it in any other way, you’ll get a compile error: “Protocol can only be used as a generic constraint.” There’s a way around this restriction, called type erasure; for an excellent discussion of type erasure, see http://robnapier.net/erasure.
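Here's a minimal sketch of the restriction in action, using a Flier protocol with an associated type like the one shown earlier:
protocol Flier {
    associatedtype Other
    func flockTogetherWith(_ f:Other)
}
struct Bird : Flier {
    func flockTogetherWith(_ f:Bird) {}
}
// let f : Flier = Bird()       // compile error: Flier can only be used as a generic constraint
func tellToFlock<T:Flier>(_ f:T) {}
tellToFlock(Bird())             // legal: Flier is used here as a type constraint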
In the examples so far, the user of a generic resolves the placeholder’s type through inference. But there’s another way to perform resolution: the user can resolve the type manually. This is called explicit specialization. In some situations, explicit specialization is mandatory — namely, if the placeholder type cannot be resolved through inference. There are two forms of explicit specialization:
The adopter of a protocol can resolve the protocol’s associated type manually through a typealias
declaration using the protocol’s associated type name with an explicit type assignment. For example:
protocol Flier { associatedtype Other } struct Bird : Flier { typealias Other = String }
The user of a generic object type can resolve the object’s placeholder type(s) manually using the same angle bracket syntax used to declare the generic in the first place, with actual type names in the angle brackets. For example:
class Dog<T> { var name : T? } let d = Dog<String>()
(That explains the Optional<String>
type used earlier in this chapter and in Chapter 3.)
You cannot explicitly specialize a generic function. You can, however, declare a generic type with a nongeneric function that uses the generic type’s placeholder; explicit specialization of the generic type resolves the placeholder, and thus resolves the function:
protocol Flier { init() } struct Bird : Flier { init() {} } struct FlierMaker<T:Flier> { static func makeFlier() -> T { return T() } } let f = FlierMaker<Bird>.makeFlier() // returns a Bird
When a class is generic, you can subclass it, provided you resolve the generic. You can do this either through a matching generic subclass or by resolving the superclass generic explicitly. For example, here’s a generic Dog:
class Dog<T> { var name : T? }
You can subclass it as a generic whose placeholder matches that of the superclass:
class NoisyDog<T> : Dog<T> {}
That’s legal because the resolution of the NoisyDog placeholder T will resolve the Dog placeholder T. The alternative is to subclass an explicitly specialized Dog:
class NoisyDog : Dog<String> {}
When a generic placeholder is constrained to a generic protocol with an associated type, you can refer to that type using a dot-notation chain: the placeholder name, a dot, and the associated type name.
Here’s an example. Imagine that in a game program, soldiers and archers are enemies of one another. I’ll express this by subsuming a Soldier struct and an Archer struct under a Fighter protocol that has an Enemy associated type, which is itself constrained to be a Fighter (again, I’ll need an extra protocol that Fighter adopts):
protocol Superfighter {} protocol Fighter : Superfighter { associatedtype Enemy : Superfighter }
I’ll resolve that associated type manually for both structs:
struct Soldier : Fighter { typealias Enemy = Archer } struct Archer : Fighter { typealias Enemy = Soldier }
Now I’ll create a generic struct to express the opposing camps of these fighters:
struct Camp<T:Fighter> { }
Now suppose that a camp may contain a spy from the opposing camp. What is the type of that spy? Well, if this is a Soldier camp, it’s an Archer; and if it’s an Archer camp, it’s a Soldier. More generally, since T is a Fighter, it’s the type of the Enemy of this adopter of Fighter. I can express that neatly by a chain consisting of the placeholder name T
, a dot, and the associated type name Enemy
:
struct Camp<T:Fighter> { var spy : T.Enemy? }
The result is that if, for a particular Camp, T is resolved to Soldier, T.Enemy
means Archer — and vice versa. We have created a correct and inviolable rule for the type that a Camp’s spy
must be. This won’t compile:
var c = Camp<Soldier>() c.spy = Soldier() // compile error
We’ve tried to assign an object of the wrong type to this Camp’s spy
property. But this does compile:
var c = Camp<Soldier>() c.spy = Archer()
Longer chains of associated type names are possible — in particular, when a generic protocol has an associated type which is itself constrained to a generic protocol with an associated type.
For example, let’s give each type of Fighter a characteristic weapon: a soldier has a sword, while an archer has a bow. I’ll make a Sword struct and a Bow struct, and I’ll unite them under a Wieldable protocol:
protocol Wieldable { } struct Sword : Wieldable { } struct Bow : Wieldable { }
I’ll add a Weapon associated type to Fighter, which is constrained to be a Wieldable, and once again I’ll resolve it manually for each type of Fighter:
protocol Superfighter {
    associatedtype Weapon : Wieldable
}
protocol Fighter : Superfighter {
    associatedtype Enemy : Superfighter
}
struct Soldier : Fighter {
    typealias Weapon = Sword
    typealias Enemy = Archer
}
struct Archer : Fighter {
    typealias Weapon = Bow
    typealias Enemy = Soldier
}
Now let’s say that every Fighter has the ability to steal his enemy’s weapon. I’ll give the Fighter generic protocol a steal(weapon:from:)
method. How can the Fighter generic protocol express the parameter types in a way that causes its adopter to declare this method with the proper types?
The from:
parameter type is this Fighter’s Enemy. We already know how to express that: it’s the placeholder plus dot-notation with the associated type name. Here, the placeholder is the adopter of this protocol — namely, Self
. So the from:
parameter type is Self.Enemy
. And what about the weapon:
parameter type? That’s the Weapon of that Enemy! So the weapon:
parameter type is Self.Enemy.Weapon
:
protocol Fighter : Superfighter { associatedtype Enemy : Superfighter func steal(weapon:Self.Enemy.Weapon, from:Self.Enemy) }
(We could omit Self
from that code, and it would still compile and would mean the same thing. But Self
would still be the implicit start of the chain, and I think it makes the meaning of the code clearer.)
The result is that the following declarations for Soldier and Archer correctly adopt the Fighter protocol, and the compiler approves:
struct Soldier : Fighter {
    typealias Weapon = Sword
    typealias Enemy = Archer
    func steal(weapon:Bow, from:Archer) { }
}
struct Archer : Fighter {
    typealias Weapon = Bow
    typealias Enemy = Soldier
    func steal(weapon:Sword, from:Soldier) { }
}
The example is artificial, but the concept is not. The Swift headers make heavy use of associated type chains; the associated type chain Iterator.Element
is particularly common, because it expresses the type of the element of a sequence. (The Sequence generic protocol has an associated type Iterator, which is constrained to be an adopter of the generic IteratorProtocol, which in turn has an associated type Element.) Even longer associated type chains are not uncommon; for example, the LazyCollectionProtocol filter
method refers to a type Self.Elements.Iterator.Element
.
A simple type constraint limits the types eligible for resolving a placeholder to a single type. Sometimes, you want to limit the eligible resolving types still further: you want additional constraints.
In a generic protocol, the colon in an associated type constraint is effectively the same as the colon that appears in a type declaration. Thus, it can be followed by multiple protocols, or by a superclass and multiple protocols:
class Dog { } class FlyingDog : Dog, Flier { } protocol Flier { } protocol Walker { } protocol Generic { associatedtype T : Flier, Walker associatedtype U : Dog, Flier }
In the Generic protocol, the associated type T can be resolved only as a type that adopts the Flier protocol and the Walker protocol, and the associated type U can be resolved only as a type that is a Dog (or a subclass of Dog) and that adopts the Flier protocol.
In the angle brackets of a generic function or object type, that syntax is illegal. In the simple case where a type is to adopt more than one protocol, you can use protocol composition:
func flyAndWalk<T: Flier & Walker> (_ f:T) {}
More generally, you can append a where
clause, consisting of one or more comma-separated additional constraints on a declared placeholder:
func flyAndWalk<T> (_ f:T) where T:Flier, T:Walker {} // or T: Flier & Walker
func flyAndWalk2<T> (_ f:T) where T:Flier, T:Dog {}
A where
clause can also impose additional constraints on the associated type of a generic protocol that already constrains a placeholder, using an associated type chain (described in the preceding section). This pseudocode shows what I mean; I’ve omitted the content of the where
clause, to focus on what the where
clause will be constraining:
protocol Flier { associatedtype Other } func flockTogether<T:Flier> (_ f:T) where T.Other /*???*/ {}
As you can see, the placeholder T is already constrained to be a Flier. Flier is itself a generic protocol, with an associated type Other. Thus, whatever type resolves T will resolve Other. The where
clause is going to constrain T.Other
; thus, it will constrain further the types eligible to resolve T, by restricting the types eligible to resolve Other.
So what sort of restriction are we allowed to impose on our associated type chain? One possibility is the same sort of restriction as in the preceding example — a colon followed by a protocol that it must adopt, or by a class that it must descend from. Here’s an example with a protocol:
protocol Flier { associatedtype Other } struct Bird : Flier { typealias Other = String } struct Insect : Flier { typealias Other = Bird } func flockTogether<T:Flier> (_ f:T) where T.Other:Equatable {}
Both Bird and Insect adopt Flier, but they are not both eligible as the argument in a call to the flockTogether
function. The flockTogether
function can be called with a Bird argument, because a Bird’s Other associated type is resolved to String, which adopts the built-in Equatable protocol. But flockTogether
can’t be called with an Insect argument, because an Insect’s Other associated type is resolved to Bird, which doesn’t adopt the Equatable protocol:
flockTogether(Bird()) // okay
flockTogether(Insect()) // compile error
Instead of a colon, we can use an equality operator ==
followed by a type. The type at the end of the associated type chain must then be this exact type — not merely an adopter or subclass. For example:
protocol Flier { associatedtype Other } protocol Walker { } struct Kiwi : Walker { } struct Bird : Flier { typealias Other = Kiwi } struct Insect : Flier { typealias Other = Walker } func flockTogether<T:Flier> (_ f:T) where T.Other == Walker {}
The flockTogether
function can be called with an Insect argument, because Insect adopts Flier and resolves Other to Walker. But it can’t be called with a Bird argument. Bird adopts Flier, and it resolves Other to an adopter of Walker, namely Kiwi — but that isn’t good enough to satisfy the ==
restriction.
The type on the right side of the ==
operator can itself be an associated type chain. The resolved types at the ends of the two chains must then be identical. For example:
protocol Flier { associatedtype Other } struct Bird : Flier { typealias Other = String } struct Insect : Flier { typealias Other = Int } func flockTwoTogether<T:Flier, U:Flier> (_ f1:T, _ f2:U) where T.Other == U.Other {}
The flockTwoTogether
function can be called with a Bird and a Bird, and it can be called with an Insect and an Insect, but it can’t be called with an Insect and a Bird, because they don’t resolve the Other associated type to the same type.
The Swift header makes extensive use of where
clauses with an ==
operator, especially as a way of restricting a sequence type. Take, for example, the String append(contentsOf:)
method, declared like this:
mutating func append<S : Sequence>(contentsOf newElements: S) where S.Iterator.Element == Character
A String must consist of only Characters. The constraint means that a character sequence — but not a sequence of something else, such as Int — can be concatenated to a String:
var s = "hello" s.append(contentsOf: " world".characters) // "hello world"
The Array append(contentsOf:)
method is declared a little differently:
mutating func append<S : Sequence>(contentsOf newElements: S) where S.Iterator.Element == Element
An Array can consist of any type of element — but only one type. Array is a generic struct whose Element placeholder is the type of its elements. The constraint here enforces a rule that you can append to an Array the elements of any sort of Sequence, but only if they are the same kind of element as the elements of this array. If the array consists of String elements, you can add more String elements to it, but not Int elements.
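For instance, here's a quick sketch of that rule in action with an array of Int:
var arr = [1, 2, 3]
arr.append(contentsOf: [4, 5])       // legal: the appended elements are Ints
arr.append(contentsOf: 6...7)        // legal: a range of Int is a Sequence of Int
// arr.append(contentsOf: ["six"])   // compile error: String is not this array's Element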
An extension is a way of injecting your own code into an object type that has already been declared elsewhere; you are extending an existing object type. You can extend your own object types; you can also extend one of Swift’s object types or one of Cocoa’s object types, in which case you are adding functionality to a type that doesn’t belong to you!
Extension declaration can take place only at the top level of a file. To declare an extension, put the keyword extension
followed by the name of an existing object type, then optionally a colon plus the names of any protocols you want to add to the list of those adopted by this type, and finally curly braces containing the usual things that go inside an object type declaration — with the following restrictions:
In my real programming life, I sometimes extend a built-in Swift or Cocoa type just to encapsulate some missing functionality by expressing it as a property or method. Here are some examples from actual apps.
In a card game, I need to shuffle the deck, which is stored in an array. I extend Swift’s built-in Array type to give it a shuffle
method:
extension Array {
    mutating func shuffle () {
        for i in (0..<self.count).reversed() {
            let ix1 = i
            let ix2 = Int(arc4random_uniform(UInt32(i+1)))
            (self[ix1], self[ix2]) = (self[ix2], self[ix1])
        }
    }
}
Cocoa’s Core Graphics framework has many useful functions associated with the CGRect struct, and Swift already extends CGRect to add some helpful properties and methods; but there’s no shortcut for getting the center point (a CGPoint) of a CGRect, something that in practice one very often needs. I extend CGRect to give it a center
property:
extension CGRect { var center : CGPoint { return CGPoint(x:self.midX, y:self.midY) } }
An extension can declare a static or class member; since an object type is usually globally available, this can be a good way to slot a global function into an appropriate namespace. For example, in one of my apps, I find myself frequently using a certain color (a UIColor). Instead of creating that color repeatedly, it makes sense to encapsulate the instructions for generating it in a global function. But instead of making that function completely global, I make it — appropriately enough — a read-only class variable of UIColor:
extension UIColor { class var myGolden : UIColor { return self.init( red:1.000, green:0.894, blue:0.541, alpha:0.900 ) } }
Now I can use that color throughout my code as UIColor.myGolden
, completely parallel to built-in class properties such as UIColor.red
.
Another good use of an extension is to make built-in Cocoa classes work with your private data types. For example, in my Zotz app, I’ve defined an enum whose raw values are the key strings to be used when archiving or unarchiving a property of a Card:
enum Archive : String { case color = "itsColor" case number = "itsNumber" case shape = "itsShape" case fill = "itsFill" }
The only problem is that in order to use this enum when archiving, I have to take its rawValue
each time:
coder.encode(self.color, forKey:Archive.color.rawValue) coder.encode(self.number, forKey:Archive.number.rawValue) coder.encode(self.shape, forKey:Archive.shape.rawValue) coder.encode(self.fill, forKey:Archive.fill.rawValue)
That’s just ugly. An elegant fix (suggested in a WWDC 2015 video) is to teach NSCoder, the class of coder
, what to do when the forKey:
argument is an Archive instead of a String. In an extension, I overload the encode(_:forKey:)
method:
extension NSCoder { func encode(_ objv: Any?, forKey key: Archive) { self.encode(objv, forKey:key.rawValue) } }
In effect, I’ve moved the rawValue
call out of my code and into NSCoder’s code. Now I can archive a Card without saying rawValue
:
coder.encode(self.color, forKey:Archive.color) coder.encode(self.number, forKey:Archive.number) coder.encode(self.shape, forKey:Archive.shape) coder.encode(self.fill, forKey:Archive.fill)
Extensions on one’s own object types can help to organize one’s code. A frequently used convention is to add an extension for each protocol one’s object type needs to adopt, like this:
class ViewController: UIViewController {
    // ... UIViewController method overrides go here ...
}
extension ViewController : UIPopoverPresentationControllerDelegate {
    // ... UIPopoverPresentationControllerDelegate methods go here ...
}
extension ViewController : UIToolbarDelegate {
    // ... UIToolbarDelegate methods go here ...
}
An extension on your own object type is also a way to spread your definition of that object type over multiple files, if you feel that several shorter files are better than one long file.
When you extend a Swift struct, a curious thing happens with initializers: it becomes possible to declare an initializer and keep the implicit initializers:
struct Digit { var number : Int } extension Digit { init() { self.init(number:42) } }
In that code, the explicit declaration of an initializer through an extension did not cause us to lose the implicit memberwise initializer, as would have happened if we had declared the same initializer inside the original struct declaration. Now we can instantiate a Digit by calling the explicitly declared initializer — Digit()
— or by calling the implicit memberwise initializer — Digit(number:7)
.
When you extend a protocol, you can add methods and properties to the protocol, just as for any object type. Unlike a protocol declaration, these methods and properties are not mere requirements, to be fulfilled by the adopter of the protocol; they are actual methods and properties, to be inherited by the adopter of the protocol! For example:
protocol Flier { } extension Flier { func fly() { print("flap flap flap") } } struct Bird : Flier { }
Observe that Bird can now adopt Flier without implementing the fly
method.
That’s because the Flier protocol extension supplies the fly
method! Bird thus inherits an implementation of fly
:
let b = Bird() b.fly() // flap flap flap
An adopter can provide its own alternative implementation of a method inherited from a protocol extension:
protocol Flier { } extension Flier { func fly() { print("flap flap flap") } } struct Insect : Flier { func fly() { print("whirr") } } let i = Insect() i.fly() // whirr
But be warned: this kind of inheritance is not polymorphic. The adopter’s implementation is not an override; it is merely another implementation. The internal identity rule does not apply; it matters how a reference is typed:
let f : Flier = Insect() f.fly() // flap flap flap
Even though f
is internally an Insect (as we can discover with the is
operator), the fly
message is being sent to an object reference typed as a Flier, so it is Flier’s implementation of the fly
method that is called, not Insect’s implementation.
To get something that looks like polymorphic inheritance, we must declare fly
as a requirement in the original protocol:
protocol Flier {
    func fly() // *
}
extension Flier {
    func fly() {
        print("flap flap flap")
    }
}
struct Insect : Flier {
    func fly() {
        print("whirr")
    }
}
Now an Insect maintains its internal integrity:
let f : Flier = Insect() f.fly() // whirr
The chief benefit of protocol extensions is that they allow code to be moved to an appropriate scope. Here’s an example from my Zotz app. I have four enums, each representing an attribute of a Card: Fill, Color, Shape, and Number. They all have an Int raw value. I was tired of having to say rawValue:
every time I initialized one of these enums from its raw value, so I gave each enum a delegating initializer with no externalized parameter name, that calls the built-in init(rawValue:)
initializer:
enum Fill : Int { case empty = 1 case solid case hazy init?(_ what:Int) { self.init(rawValue:what) } } enum Color : Int { case color1 = 1 case color2 case color3 init?(_ what:Int) { self.init(rawValue:what) } } // ... and so on ...
However, I didn’t like the repetition of my initializer declaration. This initializer is shared by all four enums, so I’d like to write it once, as part of some type from which all four enums can inherit it. That sounds like a protocol extension! An enum with a raw value automatically adopts the built-in generic RawRepresentable protocol, where the raw value type is an associated type called RawValue. So I can shoehorn my initializer into the RawRepresentable protocol:
extension RawRepresentable { init?(_ what:RawValue) { self.init(rawValue:what) } } enum Fill : Int { case empty = 1 case solid case hazy } enum Color : Int { case color1 = 1 case color2 case color3 } // ... and so on ...
The Swift standard library makes heavy use of protocol extensions. Again, this is because protocol extensions allow code to be moved to an appropriate scope. The Swift standard library declares a lot of important protocols; protocol extensions allow those protocols to be given methods, like this:
extension Sequence { func enumerated() -> EnumeratedSequence<Self> }
Without protocol extensions, the only way to apply enumeration to a Sequence, and only to a Sequence, would be to declare a global generic function with a constraint restricting the parameter to Sequence adopters. (And before Swift 2.0, when protocol extensions were introduced, that’s exactly what was declared.)
When you extend a generic type, the placeholder type names are visible to your extension declaration. That’s good, because you might need to use them; but it can make your code a little mystifying, because you seem to be using an undefined type name out of the blue. It might be a good idea to add a comment, to remind yourself what you’re up to:
class Dog<T> {
    var name : T?
}
extension Dog {
    func sayYourName() -> T? { // T is the type of self.name
        return self.name
    }
}
A generic type extension declaration can include a where
clause. This has the same effect as any generic constraint: it limits which resolvers of the generic can call the code injected by this extension, and assures the compiler that your code is legal for those resolvers.
As with protocol extensions, this means that a global function can be turned into a method. Recall this example from earlier in this chapter:
func myMin<T:Comparable>(_ things:T...) -> T {
    var minimum = things[0]
    for ix in 1..<things.count {
        if things[ix] < minimum {
            minimum = things[ix]
        }
    }
    return minimum
}
That’s a global function. I’d prefer to inject it into Array as a method. Array is a generic struct whose placeholder type is called Element. To make this work, I need somehow to bring along the Comparable type constraint that makes this code legal; without it, as you remember, my use of <
won’t compile. I can do that with a where
clause:
extension Array where Element:Comparable {
    func myMin() -> Element {
        var minimum = self[0]
        for ix in 1..<self.count {
            if self[ix] < minimum {
                minimum = self[ix]
            }
        }
        return minimum
    }
}
The where
clause is a constraint guaranteeing that this array’s elements adopt Comparable, so the compiler permits the use of the <
operator — and it doesn’t permit the myMin
method to be called on an array whose elements don’t adopt Comparable.
The Swift standard library makes heavy use of generic extensions. For example, in real life, there is already a min
method; myMin
isn’t needed. The min
method is a Sequence method — and it is declared just like myMin
, namely through an extension on the generic Sequence protocol with a constraint guaranteeing that the sequence’s elements are comparable:
extension Sequence where Iterator.Element : Comparable { func min() -> Self.Iterator.Element? }
An interesting problem arises when you want your generic extension where
clause to specify type equality (==
) instead of protocol adoption or class inheritance (:
). The problem is that you can’t do that with a generic struct. Suppose, for example, that I want to give Array a sum
method when the elements are Ints. I can’t do it:
extension Array where Element == Int { // compile error
    func sum() -> Int {
        return self.reduce(0, +)
    }
}
But you can do it with a generic protocol, so the trick is to extend a generic protocol adopted by your struct. In this case, there is already a generic protocol adopted by Array, namely Sequence; so the solution is to extend that instead:
extension Sequence where Iterator.Element == Int { func sum() -> Int { return self.reduce(0, +) } }
(I’ll discuss reduce
later in this chapter.)
Swift provides a few built-in types as general umbrella types, capable of embracing multiple real types under a single heading.
The Any type is the universal Swift umbrella type. Where an Any object is expected, absolutely any object or function can be passed, without casting:
func anyExpecter(_ a:Any) {}
anyExpecter("howdy") // a struct instance
anyExpecter(String.self) // a struct type
anyExpecter(Dog()) // a class instance
anyExpecter(Dog.self) // a class type
anyExpecter(anyExpecter) // a function
Going the other way, of course, if you want to type an Any object as a more specific type, you will generally have to cast down. Such a cast is legal for any specific object type or function type. A forced cast isn’t safe, but you can easily make it safe, because you can also test an Any object against any specific object type or function type. Here, anything
is typed as Any:
if anything is String {
    let s = anything as! String
    // ...
}
In Swift 3, the Any umbrella type is of great importance because it is the general medium of interchange between Swift and the Cocoa Objective-C APIs. When an Objective-C object type is nonspecific (Objective-C id
), it will appear to Swift as Any. Commonly encountered examples are UserDefaults, NSCoding, and key–value coding; all of these allow you to pass an object of indeterminate class along with a string key name, and they allow you to retrieve an object of indeterminate class by a string key name. That object is typed, in Swift, as Any (or as an Optional wrapping Any, so that it can be nil
).
For example:
let ud = UserDefaults.standard let s = "howdy" ud.set(s, forKey:"Test")
The first parameter of UserDefaults set(_:forKey:)
is typed as Any. Thus, Any functions as a general conduit for crossing the bridge between the Swift world and Cocoa’s Objective-C world.
However, merely casting or assigning to Any does not in fact cross the bridge there and then. Rather, the bridge will be crossed later, when Objective-C actually receives this value and has to do something with it. At that time, it will be transformed into an object type that Objective-C can deal with. Objective-C objects must be class types. Certain common Swift types are structs, which would be meaningless to Objective-C; therefore they are automatically bridged to Objective-C class types. For example, a String becomes an NSString, and an Int becomes an NSNumber. Nonclass types that are not automatically bridged are boxed up in a way that allows them to survive the journey into Objective-C’s world, even though Objective-C can’t do anything directly with such types.
Coming back the other way, if Objective-C hands you an Any object, you will need to cast it down to its underlying type in order to do anything useful with it:
let ud = UserDefaults.standard let test = ud.object(forKey:"Test") as! String
The result returned from UserDefaults object(forKey:)
is typed as Any — actually, as an Optional wrapping an Any, because UserDefaults might need to return nil
to indicate that no object exists for that key. But you know that it’s supposed to be a string, so you cast it down to String. Of course, you’d better be telling the truth when you cast down with as!
, or you will crash when the code runs and the cast turns out to be impossible. You can use the as?
and is
operators, if you’re in doubt, to make sure your cast is safe:
let ud = UserDefaults.standard
let test = ud.object(forKey:"Test") as? String
if test != nil {
    // ...
}
AnyObject is an empty protocol (requiring no properties or methods) with the special feature that all class types conform to it automatically. Although Objective-C APIs present Objective-C id
as Any in Swift, Swift AnyObject is Objective-C id
. In Swift 3, AnyObject is useful primarily when you want to take advantage of the behavior of Objective-C id
, as I’ll demonstrate in a moment.
A class type can be assigned directly where an AnyObject is expected; to retrieve it as its original type, you’ll need to cast down:
class Dog { } let d = Dog() let anyo : AnyObject = d let d2 = anyo as! Dog
Assigning to an AnyObject requires crossing the bridge to Objective-C then and there. If you’re not starting with a class type, you must cast (with as
). If this type is automatically bridged to an Objective-C class type, it becomes that type; other types are boxed up in a way that allows them to survive the journey into Objective-C’s world, even though Objective-C can’t deal with them directly:
let s = "howdy" as AnyObject // String to NSString to AnyObject let i = 1 as AnyObject // Int to NSNumber to AnyObject let r = CGRect() as AnyObject // CGRect to boxed type to AnyObject
Thus we may imagine that when you hand a Swift object off to Objective-C as an Any value, as in the previous section, it later crosses the bridge by being cast, behind the scenes, to AnyObject.
Because AnyObject is Objective-C id
, it can be used, like Objective-C id
, to suspend the compiler’s judgment as to whether a certain message can be sent to an object. Thus, you can send a message to an AnyObject without bothering to cast down to its real type.
You can’t send just any old message to an AnyObject; the message must correspond to a class member that meets one of the following criteria:
@objc
.This feature is fundamentally parallel to optional protocol members, which I discussed earlier in this chapter. Let’s start with two classes:
class Dog {
    @objc var noise : String = "woof"
    @objc func bark() -> String {
        return "woof"
    }
}
class Cat {}
The Dog property noise
and the Dog method bark
are marked @objc
, so they are visible as potential messages to be sent to an AnyObject. To prove it, I’ll type a Cat as an AnyObject and send it one of these messages. Let’s start with the noise
property:
let c : AnyObject = Cat() let s = c.noise
That code, amazingly, compiles. Moreover, it doesn’t crash when the code runs! The noise
property has been typed as an Optional wrapping its original type. Here, that’s an Optional wrapping a String. If the object typed as AnyObject doesn’t implement noise
, the result is nil
and no harm done.
Now let’s try it with a method call:
let c : AnyObject = Cat() let s = c.bark?()
Again, that code compiles and is safe. If the object typed as AnyObject doesn’t implement bark
, no bark()
call is performed; the method result type has been wrapped in an Optional, so s
is typed as String?
and has been set to nil
. If the AnyObject turns out to have a bark
method (for example, if it had been a Dog), the result is an Optional wrapping the returned String. If you call bark!()
on the AnyObject instead, the result will be a String, but you’ll crash if the AnyObject doesn’t implement bark
. Unlike an optional protocol member, you can even send the message with no unwrapping. This is legal:
let c : AnyObject = Cat() let s = c.bark()
That’s just like force-unwrapping the call: the result is a String, but it’s possible to crash.
Sometimes, what you want to know is not what type an object is, but whether an object itself is the particular object you think it is. This problem can’t arise with a value type, but it can arise with a reference type, where there can be more than one distinct reference to one and the same object. A class is a reference type, so the problem can arise with class instances.
Swift’s solution is the identity operator (===
). It is defined for operands whose type is AnyObject?
, and compares one object reference with another. It is not a comparison of values for equality, like the equality operator (==
); you’re asking whether two object references refer to one and the same object. There is also a negative version of the identity operator (!==
).
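For instance, here is a minimal sketch (using a made-up Dog class) contrasting identity with mere multiple references:

class Dog {}
let d1 = Dog()
let d2 = d1           // a second reference to the very same Dog instance
let d3 = Dog()        // a different Dog instance
let same = d1 === d2  // true: one and the same object
let other = d1 === d3 // false: two distinct objects
let diff = d1 !== d3  // true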
A typical use case is that a class instance arrives from Cocoa, and you need to know whether it is in fact a particular object to which you already have a reference. For example, a Notification has an object
property that helps identify the notification (usually, it is the original sender of the notification). We can use ===
to test whether this object
is the same as some object to which we already have a reference. However, object
is typed as Any in Swift 3 (actually, as an Optional wrapping Any), so we must cast to AnyObject?
in order to take advantage of the identity operator:
func changed(_ n:Notification) {
    let player = MPMusicPlayerController.applicationMusicPlayer()
    if n.object as AnyObject? === player {
        // ...
    }
}
AnyClass is the type of AnyObject. It corresponds to the Objective-C Class type. It arises typically in declarations where a Cocoa API wants to say that a class is expected.
For example, the UIView layerClass
class property is declared, in its Swift translation, like this:
class var layerClass : AnyClass {get}
That means: if you override this class property, implement your getter to return a class. This will presumably be a CALayer subclass. To return an actual class in your implementation, send the self
message to the name of the class:
override class var layerClass : AnyClass { return CATiledLayer.self }
A reference to an AnyClass object behaves much like a reference to an AnyObject object. You can send it any Objective-C message that Swift knows about — any Objective-C class message. To illustrate, once again I’ll start with two classes:
class Dog { @objc static var whatADogSays : String = "woof" } class Cat {}
Objective-C can see whatADogSays
, and it sees it as a class property. Therefore you can send whatADogSays
to an AnyClass reference:
let c : AnyClass = Cat.self let s = c.whatADogSays
A reference to a class, such as you can obtain by applying type(of:)
to an object, or by sending self
to the type name, is of a type that adopts AnyClass, and you can compare references to such types with the ===
operator. In effect, this is a way of finding out whether two references to classes refer to the same class. This construct is valuable because you can’t use the is
operator when the thing on the right side is a type reference rather than a literal type name. For example:
func typeTester(_ d:Dog, _ whattype:Dog.Type) {
    if type(of:d) === whattype {
        // ...
    }
}
The condition is true
only if d
is identically of type whattype
. For example, if Dog has a subclass NoisyDog, then the condition is true
if the parameters are Dog()
and Dog.self
, or if they are NoisyDog()
and NoisyDog.self
, but not if they are NoisyDog()
and Dog.self
.
Swift, in common with most modern computer languages, has built-in collection types Array and Dictionary, along with a third type, Set. Array and Dictionary are sufficiently important that the language accommodates them with some special syntax. At the same time, like most Swift types, they are quite thinly provided with related functions; some missing functionality is provided by Cocoa’s NSArray and NSDictionary, to which they are respectively bridged. The Set collection type is bridged to Cocoa’s NSSet.
An array (Array, a struct) is an ordered collection of object instances (the elements of the array) accessible by index number, where an index number is an Int numbered from 0
. Thus, if an array contains four elements, the first has index 0
and the last has index 3
. A Swift array cannot be sparse: if there is an element with index 3
, there is also an element with index 2
and so on.
The salient feature of Swift arrays is their strict typing. Unlike some other computer languages, a Swift array’s elements must be uniform — that is, the array must consist solely of elements of the same definite type. Even an empty array must have a definite element type, despite lacking elements at this moment. An array is itself typed in accordance with its element type. Arrays whose elements are of different types are considered, themselves, to be of two different types: an array of Int elements is of a different type from an array of String elements.
If all this reminds you of Optionals, it should. Like an Optional, a Swift array is a generic. It is declared as Array<Element>
, where the placeholder Element is the type of a particular array’s elements. And, like an Optional, Array types are covariant, meaning that they behave polymorphically in accordance with their element types: if NoisyDog is a subclass of Dog, then an array of NoisyDog can be used where an array of Dog is expected.
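As a quick sketch of that covariance (assuming a Dog class with a NoisyDog subclass, as in the examples that follow):

class Dog {}
class NoisyDog : Dog {}
func walk(_ dogs:[Dog]) {}
let noisyDogs = [NoisyDog(), NoisyDog()] // inferred as [NoisyDog]
walk(noisyDogs) // legal: a [NoisyDog] is accepted where a [Dog] is expected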
To declare or state the type of a given array’s elements, you could explicitly resolve the generic placeholder; an array of Int elements would thus be an Array<Int>
. However, Swift offers syntactic sugar for stating an array’s element type, using square brackets around the name of the element type, like this: [Int]
. That’s the syntax you’ll use most of the time.
A literal array is represented as square brackets containing a list of its elements separated by comma (and optional spaces): for example, [1,2,3]
. The literal for an empty array is empty square brackets: []
.
An array’s default initializer init()
, called by appending empty parentheses to the array’s type, yields an empty array of that type. Thus, you can create an empty array of Int like this:
var arr = [Int]()
Alternatively, if a reference’s type is known in advance, the empty array []
can be inferred to that type. Thus, you can also create an empty array of Int like this:
var arr : [Int] = []
If you’re starting with a literal array containing elements, you won’t usually need to declare the array’s type, because Swift will infer it by looking at the elements. For example, Swift will infer that [1,2,3]
is an array of Int. If the array element types consist of a class and its subclasses, like Dog and NoisyDog, Swift will infer the common superclass as the array’s type. However, in some cases you will need to declare an array reference’s type explicitly even while assigning a literal to that array:
let arr : [Any] = [1, "howdy"]          // mixed bag
let arr2 : [Flier] = [Insect(), Bird()] // protocol adopters
An array also has an initializer whose parameter is a sequence. This means that if a type is a sequence, you can split an instance of it into the elements of an array. For example:
Array(1...3) generates the array of Int [1,2,3].
Array("hey".characters) generates the array of Character ["h","e","y"].
Array(d), where d is a Dictionary, generates an array of tuples of the key–value pairs of d.

Another array initializer, init(repeating:count:), lets you populate an array with the same value. In this example, I create an array of 100 Optional strings initialized to nil:
let strings : [String?] = Array(repeating:nil, count:100)
That’s the closest you can get in Swift to a sparse array; we have 100 slots, each of which might or might not contain a string (and to start with, none of them do).
When you assign, pass, or cast an array of a certain type to another array type, you are really operating on the individual elements of the array. Thus, for example:
let arr : [Int?] = [1,2,3]
That code is actually syntactic sugar: assigning an array of Int where an array of Optionals wrapping Int is expected constitutes a request that each individual Int in the original array should be wrapped in an Optional. And that is exactly what happens:
let arr : [Int?] = [1,2,3] print(arr) // [Optional(1), Optional(2), Optional(3)]
Similarly, suppose we have a Dog class and its NoisyDog subclass; then this code is legal:
let dog1 : Dog = NoisyDog()
let dog2 : Dog = NoisyDog()
let arr = [dog1, dog2]
let arr2 = arr as! [NoisyDog]
In the third line, we have an array of Dog. In the fourth line, we apparently cast this array down to an array of NoisyDog — which really means that we cast each individual Dog in the first array to a NoisyDog (and we won’t crash when we do that, provided each element of the first array really is a NoisyDog).
Similarly, the as?
operator will cast an array to an Optional wrapping an array, which will be nil
if the requested cast cannot be performed for each element individually:
let dog1 : Dog = NoisyDog()
let dog2 : Dog = NoisyDog()
let dog3 : Dog = Dog()
let arr = [dog1, dog2]
let arr2 = arr as? [NoisyDog] // Optional wrapping an array of NoisyDog
let arr3 = [dog2, dog3]
let arr4 = arr3 as? [NoisyDog] // nil
Finally, you can test each element of an array with the is
operator by testing the array itself. For example, given the array of Dog from the previous code, you can say:
if arr is [NoisyDog] { // ...
That will be true
if each element of the array is in fact a NoisyDog.
Array equality works just as you would expect: two arrays are equal if they contain the same number of elements and all the elements are pairwise equal in order:
let i1 = 1
let i2 = 2
let i3 = 3
let arr : [Int] = [1,2,3]
if arr == [i1,i2,i3] { // they are equal!
Two arrays don’t have to be of the same type to be compared against one another for equality, but the test won’t succeed unless they do in fact contain objects that are equal to one another. Here, I compare a Dog array against a NoisyDog array; this is legal if equatability is defined for two Dogs. (For example, Dog might be an NSObject subclass; or you might make Dog adopt Equatable, as I’ll explain in Chapter 5.) The two arrays are in fact equal, because the dogs they contain are the same dogs in the same order:
let nd1 = NoisyDog()
let d1 = nd1 as Dog
let nd2 = NoisyDog()
let d2 = nd2 as Dog
if [d1,d2] == [nd1,nd2] { // they are equal!
Because an array is a struct, it is a value type, not a reference type. This means that every time an array is assigned to a variable or passed as argument to a function, it is effectively copied. I do not mean to imply, however, that merely assigning or passing an array is expensive, or that a lot of actual copying takes place every time. If the reference to an array is a constant, clearly no copying is necessary; and even operations that yield a new array derived from another array, or that mutate an array, may be quite efficient. You just have to trust that the designers of Swift have thought about these problems and have implemented arrays efficiently behind the scenes.
Although an array itself is a value type, its elements are treated however those elements would normally be treated. In particular, an array of class instances, assigned to multiple variables, results in multiple references to the same instances.
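Here’s a minimal sketch of that point, using a made-up Dog class with a name property:

class Dog { var name = "Fido" }
let arr1 = [Dog()]
let arr2 = arr1        // the array, a value type, is effectively copied...
arr2[0].name = "Rover" // ...but both arrays refer to the very same Dog
print(arr1[0].name)    // "Rover"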
The Array struct implements subscript methods to allow access to elements using square brackets after a reference to an array. You can use an Int inside the square brackets. For example, in an array consisting of three elements, if the array is referred to by a variable arr
, then arr[1]
accesses the second element.
You can also use a Range of Int inside the square brackets. For example, if arr
is an array with three elements, then arr[1...2]
signifies the second and third elements. Technically, an expression like arr[1...2]
yields something called an ArraySlice. However, an ArraySlice is very similar to an array; for example, you can subscript an ArraySlice in just the same ways you would subscript an array, and an ArraySlice can be passed where an array is expected.
In general, therefore, you will probably pretend that an ArraySlice is an array. However, they are not the same thing. An ArraySlice is not a new object; it’s a way of pointing into a section of the original array. For this reason, its index numbers are those of the original array. For example:
let arr = ["manny", "moe", "jack"] let slice = arr[1...2] print(slice[1]) // moe
The ArraySlice slice
consists of "moe"
and "jack"
— and these are not merely "moe"
and "jack"
taken from the original array, but the "moe"
and "jack"
in the original array. For this reason, their index numbers are 1 and 2, just as in the original array. If you want to extract a new array based on this slice, coerce the slice to an Array:
let arr2 = Array(slice) print(arr2[1]) // jack
If the reference to an array is mutable (var
, not let
), then a subscript expression can be assigned to. This alters what’s in that slot. Of course, what is assigned must accord with the type of the array’s elements:
var arr = [1,2,3] arr[1] = 4 // arr is now [1,4,3]
If the subscript is a range, what is assigned must be a slice. You can assign a literal array, because it will be cast for you to an ArraySlice; but if what you’re starting with is an array reference, you’ll have to cast it to a slice yourself. Such assignment can change the length of the array being assigned to:
var arr = [1,2,3]
arr[1..<2] = [7,8]            // arr is now [1,7,8,3]
arr[1..<2] = []               // arr is now [1,8,3]
arr[1..<1] = [10]             // arr is now [1,10,8,3] (no element was removed!)
let arr2 = [20,21]
// arr[1..<1] = arr2          // compile error! You have to say this:
arr[1..<1] = ArraySlice(arr2) // arr is now [1,20,21,10,8,3]
It is a runtime error to access an element by a number larger than the largest element number or smaller than the smallest element number. If arr
has three elements, speaking of arr[-1]
or arr[3]
is not illegal linguistically, but your program will crash.
It is legal for the elements of an array to be arrays. For example:
let arr = [[1,2,3], [4,5,6], [7,8,9]]
That’s an array of arrays of Int. Its type declaration, therefore, is [[Int]]
. (No law says that the contained arrays have to be the same length; that’s just something I did for clarity.)
To access an individual Int inside those nested arrays, you can chain subscript operations:
let arr = [[1,2,3], [4,5,6], [7,8,9]] let i = arr[1][1] // 5
If the outer array reference is mutable, you can also write into a nested array:
var arr = [[1,2,3], [4,5,6], [7,8,9]] arr[1][1] = 100
You can modify the inner arrays in other ways as well; for example, you can insert additional elements into them.
An array is a Collection, which is itself a Sequence. If those terms have a familiar ring, they should: the same is true of a String’s characters
, which I called a character sequence in Chapter 3. For this reason, an array and a character sequence bear some striking similarities to one another.
As a collection, an array’s count
read-only property reports the number of elements it contains. If an array’s count
is 0
, its isEmpty
property is true
.
An array’s first
and last
read-only properties return its first and last elements, but they are wrapped in an Optional because the array might be empty and so these properties would need to be nil
. This is one of those rare situations in Swift where you can wind up with an Optional wrapping an Optional. For example, consider an array of Optionals wrapping Ints, and what happens when you get the last
property of such an array.
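For instance, here’s a quick sketch of that double-Optional situation:

let arr : [Int?] = [1, nil]
let last = arr.last // an Int?? — here, Optional(nil): the array isn’t empty, but its last element is nil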
An array’s largest accessible index is one less than its count
. You may find yourself calculating index values with reference to the count
; for example, to refer to the last two elements of arr
, you can say:
let arr = [1,2,3] let slice = arr[arr.count-2...arr.count-1] // [2,3]
Swift doesn’t adopt the modern convention of letting you use negative numbers as a shorthand for that calculation. On the other hand, for the common case where you want the last n
elements of an array, you can use the suffix(_:)
method:
let arr = [1,2,3] let slice = arr.suffix(2) // [2,3]
Both suffix(_:)
and its companion prefix(_:)
yield ArraySlices, and have the remarkable feature that there is no penalty for going out of range:
let arr = [1,2,3] let slice = arr.suffix(10) // [1,2,3] (and no crash)
Instead of describing the size of the suffix or prefix by its count, you can express the limit of the suffix or prefix by its index:
let arr = [1,2,3]
let slice = arr.suffix(from:1)     // [2,3]
let slice2 = arr.prefix(upTo:1)    // [1]
let slice3 = arr.prefix(through:1) // [1,2]
An array’s startIndex
property is 0
, and its endIndex
property is its count
. An array’s indices
property is a half-open range whose endpoints are the array’s startIndex
and endIndex
— that is, a range accessing the entire array. Moreover, these values are Ints, so you can use ordinary arithmetic operations on them:
let arr = [1,2,3] let slice = arr[arr.endIndex-2..<arr.endIndex] // [2,3]
But the startIndex
, endIndex
, and indices
of an ArraySlice are measured against the original array; for example, after the previous code, slice.startIndex
is 1.
The index(of:)
method reports the index of the first occurrence of an element in an array, but it is wrapped in an Optional so that nil
can be returned if the element doesn’t appear in the array. If the array consists of Equatables, the comparison uses ==
behind the scenes to identify the element being sought:
let arr = [1,2,3] let ix = arr.index(of:2) // Optional wrapping 1
Alternatively, you can call index(where:)
, supplying your own function that takes an element type and returns a Bool, and you’ll get back the index of the first element for which that Bool is true
. In this example, my Bird struct has a name
String property:
let aviary = [Bird(name:"Tweety"), Bird(name:"Flappy"), Bird(name:"Lady")] let ix = aviary.index {$0.name.characters.count < 5} // Optional(2)
If what you want is not the index but the object itself, the first(where:)
method returns it — wrapped, naturally, in an Optional.
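For instance, continuing the aviary example (Bird being my made-up struct with a name property, repeated here for completeness):

struct Bird { var name : String }
let aviary = [Bird(name:"Tweety"), Bird(name:"Flappy"), Bird(name:"Lady")]
let bird = aviary.first(where: {$0.name.characters.count < 5})
// Optional(Bird(name: "Lady"))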
As a sequence, an array’s contains(_:)
method reports whether it contains an element.
Again, you can rely on the ==
operator if the elements are Equatables, or you can supply your own function that takes an element type and returns a Bool:
let arr = [1,2,3]
let ok = arr.contains(2)        // true
let ok2 = arr.contains {$0 > 3} // false
The starts(with:)
method reports whether an array’s starting elements match the elements of a given sequence of the same type. Once more, you can rely on the ==
operator for Equatables, or you can supply a function that takes two values of the element type and returns a Bool stating whether they match:
let arr = [1,2,3]
let ok = arr.starts(with:[1,2])                        // true
let ok2 = arr.starts(with:[1,-2]) {abs($0) == abs($1)} // true
The elementsEqual(_:)
method is the sequence generalization of array comparison: the two sequences must be of the same length, and either their elements must be Equatables or you can supply a matching function.
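For example (a small sketch of both forms):

let arr = [1,2,3]
let ok = arr.elementsEqual(1...3) // true; the other operand can be any sequence of Int
let ok2 = arr.elementsEqual([-1,-2,-3]) {abs($0) == abs($1)} // true, using a matching function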
The min
and max
methods return the smallest or largest element in an array, wrapped in an Optional in case the array is empty. If the array consists of Comparables, you can let the <
operator do its work; alternatively, you can call min(by:)
or max(by:)
, supplying a function that returns a Bool stating whether the smaller of two given elements is the first:
let arr = [3,1,-2]
let min = arr.min()                  // Optional(-2)
let min2 = arr.min {abs($0)<abs($1)} // Optional(1)
If the reference to an array is mutable, the append(_:)
and append(contentsOf:)
instance methods add elements to the end of it. The difference between them is that append(_:)
takes a single value of the element type, while append(contentsOf:)
takes a sequence of the element type. For example:
var arr = [1,2,3]
arr.append(4)
arr.append(contentsOf:[5,6])
arr.append(contentsOf:7...8)
// arr is now [1,2,3,4,5,6,7,8]
The +
operator is overloaded to behave like append(contentsOf:)
(not append(_:)
!) when the left-hand operand is an array, except that it generates a new array, so it works even if the reference to the array is a constant (let
). If the reference to the array is mutable (var
), you can append to it in place with the +=
operator. Thus:
let arr = [1,2,3]
let arr2 = arr + [4] // arr2 is now [1,2,3,4]
var arr3 = [1,2,3]
arr3 += [4]          // arr3 is now [1,2,3,4]
If the reference to an array is mutable, the instance method insert(at:)
inserts a single element at the given index. To insert multiple elements at once, call the insert(contentsOf:at:)
method. Assignment into a range-subscripted array, which I described earlier, is even more flexible.
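For instance, a brief sketch of both insertion methods:

var arr = [1,2,3]
arr.insert(0, at:0)                  // arr is now [0,1,2,3]
arr.insert(contentsOf:[10,11], at:2) // arr is now [0,1,10,11,2,3]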
If the reference to an array is mutable, the instance method remove(at:)
removes the element at that index; the instance method removeLast
removes the last element. These methods also return the value that was removed from the array; you can ignore the returned value if you don’t need it. These methods do not wrap the returned value in an Optional, and accessing an out-of-range index will crash your program. On the other hand, popLast
does wrap the returned value in an Optional, and is thus safe even if the array is empty.
Similar to removeLast
and popLast
are removeFirst
and popFirst
. Alternate forms removeFirst(_:)
and removeLast(_:)
allow you to specify how many elements to remove, but return no value; they, too, can crash if there aren’t that many elements. popFirst
, remarkably, operates on a slice, not an array. This is presumably for the sake of efficiency: all it has to do is increase the slice’s startIndex
(whereas with an array, the whole array must be renumbered).
Alternatively, or if the reference is not mutable, you can use the dropFirst
and dropLast
methods to obtain a slice with the first or last element removed, respectively. Again, you can supply a parameter stating how many elements to drop. And again, there is no penalty for dropping too many elements; you simply end up with an empty slice.
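Here’s a quick sketch contrasting the removal and dropping methods just described:

var arr = [1,2,3]
let last = arr.removeLast()     // 3; arr is now [1,2]
let popped = arr.popLast()      // Optional(2); arr is now [1]
let arr2 = [1,2,3]
let slice = arr2.dropLast(2)    // [1], an ArraySlice
let slice2 = arr2.dropFirst(10) // [], an empty slice — no penalty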
The joined(separator:)
instance method starts with an array of arrays. It extracts their individual elements, and interposes between each sequence of extracted elements the elements of the separator:
. The result is an intermediate sequence called a JoinSequence, and might have to be coerced further to an Array if that’s what you were after. For example:
let arr = [[1,2], [3,4], [5,6]] let joined = Array(arr.joined(separator:[10,11])) // [1, 2, 10, 11, 3, 4, 10, 11, 5, 6]
Calling joined()
with no separator:
is a way to flatten an array of arrays. Again, it returns an intermediate sequence (or collection), so you might want to coerce to an Array:
let arr = [[1,2], [3,4], [5,6]]
let arr2 = Array(arr.joined()) // [1, 2, 3, 4, 5, 6]
The reversed
instance method yields a new array whose elements are in the opposite order from the original.
The sort
and sorted
instance methods respectively sort the original array (if the reference to it is mutable) and yield a new sorted array based on the original. Once again, you get two choices: if this is an array of Comparables, you can let the <
operator dictate the new order; alternatively, you can call sort(by:)
or sorted(by:)
, supplying a function that takes two parameters of the element type and returns a Bool stating whether the first parameter should be ordered before the second (just like min
and max
). For example:
var arr = [4,3,5,2,6,1]
arr.sort()         // [1, 2, 3, 4, 5, 6]
arr.sort {$0 > $1} // [6, 5, 4, 3, 2, 1]
In that last line, I provided an anonymous function. Alternatively, of course, you can pass as argument the name of a declared function. In Swift, comparison operators are the names of functions! Therefore, I can do the same thing like this:
var arr = [4,3,5,2,6,1] arr.sort(by: >) // [6, 5, 4, 3, 2, 1]
The split
instance method breaks an array into an array of slices at elements matching the parameter, if you call split(separator:)
, or at elements that pass a specified test, if you call split(isSeparator:
); in the latter, the parameter is a function that takes a value of the element type and returns a Bool. The separator elements themselves are eliminated:
let arr = [1,2,3,4,5,6] let arr2 = arr.split {$0 % 2 == 0} // split at evens: [[1], [3], [5]]
An array is a sequence, and so you can enumerate it, inspecting or operating with each element in turn. The simplest way is by means of a for...in
loop; I’ll have more to say about this construct in Chapter 5:
let pepboys = ["Manny", "Moe", "Jack"]
for pepboy in pepboys {
    print(pepboy) // prints Manny, then Moe, then Jack
}
Alternatively, you can use the forEach(_:)
instance method. Its parameter is a function that takes an element of the array (or other sequence) and returns no value. Think of it as the functional equivalent of the imperative for...in
loop:
let pepboys = ["Manny", "Moe", "Jack"] pepboys.forEach {print($0)} // prints Manny, then Moe, then Jack
If you need the index numbers as well as the elements, call the enumerated
instance method and loop on the result; what you get on each iteration is a tuple:
let pepboys = ["Manny", "Moe", "Jack"]
for (ix,pepboy) in pepboys.enumerated() {
    print("Pep boy \(ix) is \(pepboy)") // Pep boy 0 is Manny, etc.
}
// or:
pepboys.enumerated().forEach {print("Pep boy \($0.0) is \($0.1)")}
Swift also provides some powerful array transformation instance methods. Like forEach(_:)
, these methods all enumerate the array for you, so that the loop is buried implicitly inside the method call, making your code tighter and cleaner.
Let’s start with the filter(_:)
instance method. It yields a new array, each element of which is an element of the old array, in the same order; but some of the elements of the old array may be omitted — they were filtered out. What filters them out is a function that you supply; it accepts a parameter of the element type and returns a Bool stating whether this element should go into the new array. For example:
let pepboys = ["Manny", "Moe", "Jack"] let pepboys2 = pepboys.filter {$0.hasPrefix("M")} // [Manny, Moe]
The map(_:)
instance method yields a new array, each element of which is the result of passing the corresponding element of the old array through a function that you supply. This function accepts a parameter of the element type and returns a result which may be of some other type; Swift can usually infer the type of the resulting array elements by looking at the type returned by the function.
For example, here’s how to multiply every element of an array by 2:
let arr = [1,2,3] let arr2 = arr.map {$0 * 2} // [2,4,6]
Here’s another example, to illustrate the fact that map(_:)
can yield an array with a different element type:
let arr = [1,2,3] let arr2 = arr.map {Double($0)} // [1.0, 2.0, 3.0]
Here’s a real-life example showing how neat and compact your code can be when you use map(_:)
. In order to remove all the table cells in a section of a UITableView, I have to specify the cells as an array of IndexPath objects. If sec
is the section number, I can form those IndexPath objects individually like this:
let path0 = IndexPath(row:0, section:sec) let path1 = IndexPath(row:1, section:sec) // ...
Hmmm, I think I see a pattern here! I could generate my array of IndexPath objects by looping through the row values using for...in
. But with map(_:)
, there’s a much tighter way to express the same loop (ct
is the number of rows in the section):
let paths = Array(0..<ct).map {IndexPath(row:$0, section:sec)}
Actually, map(_:)
is a Collection instance method — and a Range is itself a Collection. Therefore, I don’t need the Array coercion:
let paths = (0..<ct).map {IndexPath(row:$0, section:sec)}
The map(_:)
method has a specialized companion, flatMap(_:)
. Applied to an array, flatMap(_:)
first calls map(_:)
, and then does one of two oddly unrelated things to the resulting array, depending on its type:
If the map function produces an array of arrays, flatMap(_:)
flattens the inner arrays. For instance, [[1],[2]].flatMap{$0}
is [1,2]
. Here’s a more interesting example:
let arr = [[1, 2], [3, 4]] let arr2 = arr.flatMap{$0.map{String($0)}} // ["1", "2", "3", "4"]
First we coerce the individual elements of each inner array to a string, thus yielding an array of arrays of String. That’s an array of arrays, so flatMap(_:)
flattens it, and we end up with a simple array of String.
If the map function produces an array of Optionals, flatMap(_:)
safely unwraps them, eliminating any nil
elements. For example:
let arr : [Any] = [1, "hey", 2, "ho"] let arr2 = arr.flatMap{$0 as? String} // ["hey", "ho"]
First we map the original array to an array of Optionals wrapping String: [nil, Optional("hey"), nil, Optional("ho")]
. Then flatMap(_:)
unwraps each element safely, resulting in an array of String; the nil
elements are filtered out. This neatly solves a class of problem that arises surprisingly often.
Finally, we come to the reduce
instance method. If you’ve learned LISP or Scheme, you’re probably accustomed to reduce
; otherwise, it can be a bit mystifying at first. It’s a way of combining all the elements of an array (actually, a sequence) into a single value. This value’s type — the result type — doesn’t have to be the same as the array’s element type. You supply, as the second parameter, a function that takes two parameters; the first is of the result type, the second is of the element type, and the result is the combination of those two parameters, as the result type. The result of each iteration becomes the function’s first parameter in the next iteration, along with the next element of the array as the second parameter. Thus, the output of combining pairs accumulates, and the final accumulated value is the final output of the function. However, that doesn’t explain where the first parameter for the first iteration comes from. The answer is that you have to supply it as the first argument of the reduce
call.
That will all be easier to understand with a simple example. Let’s assume we’ve got an array of Int. Then we can use reduce
to sum all the elements of the array. Here’s some pseudocode where I’ve left out the first argument of the call, so that you can think about what it needs to be:
let sum = arr.reduce(/*???*/) {$0 + $1}
Each pair of parameters will be added together to get the first parameter on the next iteration. The second parameter on every iteration is an element of the array. So the question is, what should the first element of the array be added to? We want the actual sum of all the elements, no more and no less; so clearly the first element of the array should be added to 0
! So here’s actual working code:
let arr = [1, 4, 9, 13, 112] let sum = arr.reduce(0) {$0 + $1} // 139
The +
operator is the name of a function of the required type, so here’s another way to write the same thing:
let sum = arr.reduce(0, +)
In my real iOS programming life, I depend heavily on these methods, often using two or even all three of them together, nested or chained or both. Here’s an example. I have a table view that displays data divided into sections. Under the hood, the data is an array of arrays of String — a [[String]]
— where each subarray represents the rows of a section. Now I want to filter that data to eliminate all strings that don’t contain a certain substring. I want to keep the sections intact, but if removing strings removes all of a section’s strings, I want to eliminate that section array entirely.
The heart of the action is the test for whether a string contains a substring. I’m going to use a Cocoa method for that, in part because it lets me do a case-insensitive search. If s
is a string from my array, and target
is the substring we’re looking for, then the code for looking to see whether s
contains target
case-insensitively is as follows:
let found = s.range(of:target, options:.caseInsensitive)
Recall the discussion of range(of:)
in Chapter 3. If found
is not nil
, the substring was found. Here, then, is the actual code, preceded by some sample data for exercising it:
let arr = [["Manny", "Moe", "Jack"], ["Harpo", "Chico", "Groucho"]] let target = "m" let arr2 = arr.map { $0.filter { let found = $0.range(of:target, options:.caseInsensitive) return (found != nil) } }.filter {$0.count > 0}
After the first two lines, setting up the sample data, what remains is a single statement — a map
call, whose function consists of a filter
call, with a filter
call chained to it. If that code doesn’t prove to you that Swift is cool, nothing will.
When you’re programming iOS, you import the Foundation framework (or UIKit, which imports Foundation) and thus the Objective-C NSArray type. Swift’s Array is bridged to Objective-C’s NSArray. The most general medium of array interchange is [Any]
; if an Objective-C API specifies an NSArray, with no further type information, Swift will see this as an array of Any. This reflects the fact that Objective-C’s rules for what can be an element of an NSArray are looser than Swift’s: the elements of an NSArray do not all have to be of the same type. On the other hand, the elements of an Objective-C NSArray must be Objective-C objects — that is, they must be class types.
Passing a Swift array to Objective-C is thus usually easy. Typically, you’ll just pass the array, either by assignment or as an argument in a function call:
let arr = [UIBarButtonItem(), UIBarButtonItem()] self.navigationItem.leftBarButtonItems = arr
The objects that you pass as elements of the array will cross the bridge to Objective-C in the usual way. For example:
let lay = CAGradientLayer() lay.locations = [0.25, 0.5, 0.75]
CAGradientLayer’s locations
property is typed as an array of NSNumber. But we can pass Double values directly, because Double is bridged to NSNumber.
On the other hand, if a Swift type can’t be seen usefully by Objective-C, automatic crossing of the bridge isn’t going to do you any good. For example, in this code, anim
is a CAKeyframeAnimation, and points
is a Swift array of CGPoint:
let points = [oldP,p1,p2,newP] // [CGPoint]
anim.values = points
That’s legal, but it isn’t going to work. A CGPoint is a struct, which isn’t an object in Objective-C, so this array is effectively opaque to Objective-C; Objective-C knows that this is an NSArray containing four objects, but it can’t extract points from those objects and do anything with them. It is up to you to wrap these CGPoints as objects. In particular, you must do with them exactly what you would do if you were writing this code in Objective-C — you must put them inside NSValue wrappers:
let points = [oldP,p1,p2,newP] anim.values = points.map {NSValue(cgPoint:$0)}
To call an NSArray method on a Swift array, you may have to cast to NSArray:
let arr = ["Manny", "Moe", "Jack"] let s = (arr as NSArray).componentsJoined(by:", ") // s is "Manny, Moe, Jack"
A Swift Array seen through a var
reference is mutable, but an NSArray isn’t mutable no matter how you see it. For mutability in Objective-C, you need an NSMutableArray, a subclass of NSArray. You can’t cast, assign, or pass a Swift array as an NSMutableArray; you have to coerce. The best way is to call the NSMutableArray initializer init(array:)
, to which you can pass a Swift array directly. To convert back from an NSMutableArray to a Swift array, you can cast:
var arr = ["Manny", "Moe", "Jack"] let arr2 = NSMutableArray(array:arr) arr2.remove("Moe") arr = arr2 as NSArray as! [String]
Now let’s talk about what happens when an NSArray arrives from Objective-C into Swift. There won’t be any problem crossing the bridge: the NSArray will arrive safely as a Swift Array. But a Swift Array of what?
Of itself, an NSArray carries no information about what type of element it contains. Starting in Xcode 7, however, the Objective-C language was modified so that the declaration of an NSArray, NSDictionary, or NSSet — the three collection types that are bridged to Swift — can include element type information. (Objective-C calls this a lightweight generic.) Thus, for the most part, the arrays you receive from Cocoa will be correctly typed.
For example, this elegant code was previously impossible:
let arr = UIFont.familyNames.map { UIFont.fontNames(forFamilyName:$0) }
The result is an array of arrays of String, listing all available fonts grouped by family. That code is possible because both of those UIFont class methods are now seen by Swift as returning an array of String. Previously, those arrays were untyped, and casting down to an array of String was up to you.
On the other hand, lightweight generics are not always present. You might read an array from a .plist file stored on disk with NSArray’s initializer init(contentsOf:)
; you might retrieve an array from UserDefaults; you might even be dealing with an Objective-C API that hasn’t been updated to use lightweight generics. In such a situation, you’re going to end up with a plain vanilla NSArray or a Swift array of Any. If that happens, then usually you will want to cast down or otherwise transform this array into an array of some specific Swift type. Here’s an Objective-C class containing a method whose return type of NSArray hasn’t been marked up with an element type:
@implementation Pep
- (NSArray*) boys {
    return @[@"Manny", @"Moe", @"Jack"];
}
@end
To call that method and do anything useful with the result, it will be necessary to cast that result down to an array of String. If I’m sure of my ground, I can force the cast:
let p = Pep() let boys = p.boys() as! [String]
As with any cast, though, be sure you don’t lie! An Objective-C array can contain more than one type of object. Don’t force such an array to be cast down to a type to which not all the elements can be cast, or you’ll crash when the cast fails; you’ll need a more deliberate strategy for eliminating or otherwise transforming the problematic elements.
A dictionary (Dictionary, a struct) is an unordered collection of object pairs. In each pair, the first object is the key; the second object is the value. The idea is that you use a key to access a value. Keys are usually strings, but they don’t have to be; the formal requirement is that they be types that are Equatable and also Hashable, meaning that they implement an Int hashValue
property such that equal keys have equal hash values. Thus, the hash values can be used behind the scenes for rapid key access. Swift numeric types, strings, and enums are Hashables.
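For instance, here’s a minimal sketch using a made-up enum as the key type (an enum without associated values is automatically Hashable):

enum Suit { case hearts, diamonds, spades, clubs }
var cardCounts : [Suit:Int] = [.hearts:1, .spades:3]
cardCounts[.clubs] = 0 // legal: Suit is Equatable and Hashable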
As with arrays, a given dictionary’s types must be uniform. The key type and the value type don’t have to be the same as one another, and they often will not be. But within any dictionary, all keys must be of the same type, and all values must be of the same type. Formally, a dictionary is a generic, and its placeholder types are ordered key type, then value type: Dictionary<Key,Value>
. As with arrays, however, Swift provides syntactic sugar for expressing a dictionary’s type, which is what you’ll usually use: [Key: Value]
. That’s square brackets containing a colon (and optional spaces) separating the key type from the value type. This code creates an empty dictionary whose keys (when they exist) will be Strings and whose values (when they exist) will be Strings:
var d = [String:String]()
The colon is used also between each key and value in the literal syntax for expressing a dictionary. The key–value pairs appear between square brackets, separated by comma, just like an array. This code creates a dictionary by describing it literally (and the dictionary’s type of [String:String]
is inferred):
var d = ["CA": "California", "NY": "New York"]
The literal for an empty dictionary is square brackets containing just a colon: [:]
. This notation can be used provided the dictionary’s type is known in some other way. Thus, this is another way to create an empty [String:String]
dictionary:
var d : [String:String] = [:]
If you try to fetch a value through a nonexistent key, there is no error, but Swift needs a way to report failure; therefore, it returns nil
. This, in turn, implies that the value returned when you successfully access a value through a key must be an Optional wrapping the real value!
Access to a dictionary’s contents is usually by subscripting. To fetch a value by key, subscript the key to the dictionary reference:
let d = ["CA": "California", "NY": "New York"] let state = d["CA"]
Bear in mind, however, that after that code, state
is not a String — it’s an Optional wrapping a String! Forgetting this is a common beginner mistake.
If the reference to a dictionary is mutable, you can also assign into a key subscript expression. If the key already exists, its value is replaced. If the key doesn’t already exist, it is created and the value is attached to it:
var d = ["CA": "California", "NY": "New York"] d["CA"] = "Casablanca" d["MD"] = "Maryland" // d is now ["MD": "Maryland", "NY": "New York", "CA": "Casablanca"]
Alternatively, call updateValue(_:forKey:)
; it has the advantage that it returns the old value wrapped in an Optional, or nil
if the key wasn’t already present.
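For instance, a brief sketch of what updateValue(_:forKey:) returns:

var d = ["CA": "California", "NY": "New York"]
let old = d.updateValue("Casablanca", forKey:"CA") // Optional("California")
let prev = d.updateValue("Maryland", forKey:"MD")  // nil: "MD" was not already present
// d is now ["MD": "Maryland", "NY": "New York", "CA": "Casablanca"]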
By a kind of shorthand, assigning nil
into a key subscript expression removes that key–value pair if it exists:
var d = ["CA": "California", "NY": "New York"] d["NY"] = nil // d is now ["CA": "California"]
Alternatively, call removeValue(forKey:)
; it has the advantage that it returns the removed value before it removes the key–value pair. The removed value is returned wrapped in an Optional, so a nil
result tells you that this key was never in the dictionary to begin with.
As with arrays, a dictionary type is legal for casting down, meaning that the individual elements will be cast down. Typically, only the value types will differ:
let dog1 : Dog = NoisyDog()
let dog2 : Dog = NoisyDog()
let d = ["fido": dog1, "rover": dog2]
let d2 = d as! [String : NoisyDog]
As with arrays, is
can be used to test the actual types in the dictionary, and as?
can be used to test and cast safely. Dictionary equality, like array equality, works as you would expect.
A dictionary has a count
property reporting the number of key–value pairs it contains, and an isEmpty
property reporting whether that number is 0
.
A dictionary has a keys
property reporting all its keys, and a values
property reporting all its values. They are effectively opaque structs (a LazyMapCollection, if you must know), but when you enumerate them with for...in
, you get the expected type:
var d = ["CA": "California", "NY": "New York"] for s in d.keys { print(s) // s is a String }
A dictionary is unordered! You can enumerate it (or its keys, or its values), but do not expect the elements to arrive in any particular order.
You can extract all a dictionary’s keys or values at once, by coercing the keys
or values
property to an array:
var d = ["CA": "California", "NY": "New York"] var keys = Array(d.keys)
You can also enumerate a dictionary itself. As you might expect from what I’ve already said, each iteration provides a key–value tuple:
var d = ["CA": "California", "NY": "New York"] for (abbrev, state) in d { print("\(abbrev) stands for \(state)") }
You can extract a dictionary’s entire contents at once as an array (of key–value tuples) by coercing the dictionary to an array:
var d = ["CA": "California", "NY": "New York"] let arr = Array(d) // [("NY", "New York"), ("CA", "California")]
Like an array, a dictionary and its keys
property and its values
property are Collections and Sequences. Therefore, everything I said about arrays as Collections and Sequences in the previous section is applicable! For example, if a dictionary d
has Int values, you can sum them with the reduce
instance method:
let sum = d.values.reduce(0, +)
You can obtain its smallest value (wrapped in an Optional):
let min = d.values.min()
You can list the values that match some criterion:
let arr = Array(d.values.filter{$0 < 2})
(The coercion to Array is needed because the sequence that results from calling filter
is lazy: there isn’t really anything in it until we enumerate it or collect it into an array.) You can sort the keys:
let keysSorted = d.keys.sorted()
The Foundation framework dictionary type is NSDictionary, and Swift’s Dictionary type is bridged to it. The untyped API characterization of an NSDictionary will be [AnyHashable:Any]
(AnyHashable is a type eraser, meaning that we can cope with the possibility that the keys will be of different hashable types).
Like NSArray, NSDictionary key and value types can now be marked in Objective-C. The most common key type in a real-life Cocoa NSDictionary is NSString, so you might well receive an NSDictionary as a [String:Any]
. Specific typing of an NSDictionary’s values, however, is much rarer; dictionaries that you pass to and receive from Cocoa will very often have values of multiple types. It is not at all surprising to have a dictionary whose keys are strings but whose values include a string, a number, a color, and an array. For this reason, you will usually not cast down the entire dictionary’s type; instead, you’ll work with the dictionary as having Any values, and cast when fetching an individual value from the dictionary. Since the value returned from subscripting a key is itself an Optional, you will typically unwrap and cast the value as a standard single move.
Here’s an example. A Cocoa Notification object comes with a userInfo
property. It is an NSDictionary that might itself be nil
, so the Swift API characterizes it as [AnyHashable:Any]?
. Let’s say I’m expecting this dictionary to be present and to contain a "progress"
key whose value is an NSNumber containing a Double. My goal is to extract that NSNumber and assign the Double that it contains to a property, self.progress
. Here’s one way to do that safely, using optional unwrapping and optional casting (n
is the Notification object):
let prog = (n.userInfo?["progress"] as? NSNumber)?.doubleValue
if prog != nil {
    self.progress = prog!
}
That’s an Optional chain that ends by fetching an NSNumber’s doubleValue
property, so prog
is implicitly typed as an Optional wrapping a Double. The code is safe, because if there is no userInfo
dictionary, or if it doesn’t contain a "progress"
key, or if that key’s value isn’t an NSNumber, nothing happens, and prog
will be nil
. I then test prog
to see whether it is nil
; if it isn’t, I know that it’s safe to force-unwrap it, and that the unwrapped value is the Double I’m after.
(In Chapter 5 I’ll describe another syntax for accomplishing the same goal, using conditional binding.)
Conversely, here’s a typical example of creating a dictionary and handing it off to Cocoa. This dictionary is a mixed bag: its values are a UIFont, a UIColor, and an NSShadow. Its keys are all strings, which I obtain as constants from Cocoa. I form the dictionary as a literal and pass it, all in one move, with no need to cast anything (titleTextAttributes
is typed as an Optional wrapping a [String:Any]
):
UINavigationBar.appearance().titleTextAttributes = [
    NSFontAttributeName : UIFont(name: "ChalkboardSE-Bold", size: 20)!,
    NSForegroundColorAttributeName : UIColor.darkText,
    NSShadowAttributeName : {
        let shad = NSShadow()
        shad.shadowOffset = CGSize(width:1.5,height:1.5)
        return shad
    }()
]
As with NSArray and NSMutableArray, if you want Cocoa to mutate a dictionary, you must coerce to NSDictionary’s subclass NSMutableDictionary. In this example, I want to do a join between two dictionaries, so I harness the power of NSMutableDictionary, which has an addEntries(from:)
method:
var d1 = ["NY":"New York", "CA":"California"] let d2 = ["MD":"Maryland"] let mutd1 = NSMutableDictionary(dictionary:d1) mutd1.addEntries(from:d2) d1 = mutd1 as NSDictionary as! [String:String] // d1 is now ["MD": "Maryland", "NY": "New York", "CA": "California"]
That sort of thing is needed quite often, because there’s no native Swift method for adding the elements of one dictionary to another dictionary. Indeed, native utility methods involving dictionaries in Swift are disappointingly thin on the ground: there really aren’t any. Still, Cocoa and the Foundation framework are right there, so perhaps Apple feels there’s no point duplicating in the Swift standard library the functionality that already exists in Foundation. If having to drop into Cocoa bothers you, you can write your own library; for example, addEntries(from:)
is easily reimplemented as a Swift Dictionary instance method through an extension:
extension Dictionary {
    mutating func addEntries(from d:[Key:Value]) {
        for (k,v) in d {
            self[k] = v
        }
    }
}
A set (Set, a struct) is an unordered collection of unique objects. Its elements must be all of one type; it has a count
and an isEmpty
property; it can be initialized from any sequence; you can cycle through its elements with for...in
. But the order of elements is not guaranteed, and you should make no assumptions about it.
The uniqueness of set elements is implemented by constraining their type to be Equatable and Hashable, just like the keys of a Dictionary. Thus, the hash values can be used behind the scenes for rapid access. Checking whether a set contains a given element, which you can do with the contains(_:)
instance method, is very efficient — far more efficient than doing the same thing with an array. Therefore, if element uniqueness is acceptable (or desirable) and you don’t need indexing or a guaranteed order, a set can be a much better choice of collection than an array.
The fact that a set’s elements are Hashables means that they must also be Equatables. This makes sense, because the notion of uniqueness depends upon being able to answer the question of whether a given object is already in the set.
There are no set literals in Swift, but you won’t need them because you can pass an array literal where a set is expected. There is no syntactic sugar for expressing a set type, but the Set struct is a generic, so you can express the type by explicitly specializing the generic:
let set : Set<Int> = [1, 2, 3, 4, 5]
In that particular example, however, there was no need to specialize the generic, as the Int type can be inferred from the array.
It sometimes happens (more often than you might suppose) that you want to examine one element of a set as a kind of sample. Order is meaningless, so it’s sufficient to obtain any element, such as the first element. For this purpose, use the first
instance property; it returns an Optional, just in case the set is empty.
The distinctive feature of a set is the uniqueness of its objects. If an object is added to a set and that object is already present, it isn’t added a second time. Conversion from an array to a set and back to an array is thus a quick and reliable way of uniquing the array — though of course order is not preserved:
let arr = [1,2,1,3,2,4,3,5]
let set = Set(arr)
let arr2 = Array(set) // [5, 2, 3, 1, 4], perhaps
A set is a Collection and a Sequence, so it is analogous to an array or a dictionary, and what I have already said about those types generally applies to a set as well. For example, Set has a map(_:)
instance method; it returns an array, but of course you can turn that right back into a set if you need to:
let set : Set = [1,2,3,4,5] let set2 = Set(set.map {$0+1}) // {6, 5, 2, 3, 4}, perhaps
If the reference to a set is mutable, a number of instance methods spring to life. You can add an object with insert(_:)
; there is no penalty for trying to add an object that’s already in the set. (To learn what happened, capture and examine the result of the insert
call.) You can remove an object and return it by specifying the object itself, or something equatable to it, with the remove(_:)
method; it returns the object wrapped in an Optional, or nil
if the object was not present. You can remove and return the first object, whatever “first” may mean, with removeFirst
; it crashes if the set is empty, so take precautions — or use popFirst
, which is safe.
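Here’s a brief sketch of those mutating methods and what they return:

var set : Set = [1,2,3]
let result = set.insert(1)   // (inserted: false, memberAfterInsert: 1) — 1 was already present
let removed = set.remove(2)  // Optional(2); set is now {1,3}, in some order
let absent = set.remove(100) // nil: 100 was never in the set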
Equality comparison (==
) is defined for sets as you would expect; two sets are equal if every element of each is also an element of the other.
If the notion of a set brings to your mind visions of Venn diagrams from elementary school, that’s good, because sets have instance methods giving you all those set operations you remember so fondly. The parameter can be a set, or it can be any sequence, which will be converted to a set; for example, it might be an array, a range, or even a character sequence:
intersection(_:), formIntersection(_:)
union(_:), formUnion(_:)
symmetricDifference(_:), formSymmetricDifference(_:)
subtracting(_:), subtract(_:)
isSubset(of:), isStrictSubset(of:)
isSuperset(of:), isStrictSuperset(of:)
(The “strict” versions are false if the two sets consist of the same elements.)
isDisjoint(with:)
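For instance, a brief sketch of a few of these operations:

let odds : Set = [1,3,5,7]
let primes : Set = [2,3,5,7]
let union = odds.union(primes)           // {1,2,3,5,7}, in some order
let common = odds.intersection(primes)   // {3,5,7}, in some order
let onlyOdd = odds.subtracting(primes)   // {1}
let ok = primes.isDisjoint(with:[4,6,8]) // true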
Here’s a real-life example of elegant Set usage from one of my apps. I have a lot of numbered pictures, of which we are to choose one randomly. But I don’t want to choose a picture that has recently been chosen. Therefore, I keep a list of the numbers of all recently chosen pictures. When it’s time to choose a new picture, I convert the list of all possible numbers to a Set, convert the list of recently chosen picture numbers to a Set, and subtract(_:)
to get a list of unused picture numbers! Now I choose a picture number at random and add it to the list of recently chosen picture numbers:
let ud = UserDefaults.standard
var recents = ud.object(forKey:Defaults.recents) as? [Int]
if recents == nil {
    recents = []
}
var forbiddenNumbers = Set(recents!)
let legalNumbers = Set(1...PIXCOUNT).subtracting(forbiddenNumbers)
let newNumber = Array(legalNumbers)[
    Int(arc4random_uniform(UInt32(legalNumbers.count)))
]
forbiddenNumbers.insert(newNumber)
ud.set(Array(forbiddenNumbers), forKey:Defaults.recents)
An option set (OptionSet struct) is Swift’s way of treating as a struct a certain type of Cocoa enumeration. It is not, strictly speaking, a Set; but it is deliberately set-like, sharing common features with Set through the SetAlgebra protocol. Thus, an option set has contains(_:)
, insert(_:)
, and remove(_:)
methods, along with all the various set operation methods.
The purpose of option sets is to help you grapple with Objective-C bitmasks. A bitmask is an integer whose bits are used as switches when multiple options are to be specified simultaneously. Such bitmasks are very common in Cocoa. In Objective-C, bitmasks are manipulated through the arithmetic bitwise-or and bitwise-and operators. Such manipulation can be mysterious and error-prone. Thanks to option sets, bitmasks can be manipulated through set operations instead.
For example, when specifying how a UIView is to be animated, you are allowed to pass an options:
argument whose value comes from the UIViewAnimationOptions enumeration, whose definition (in Objective-C) begins as follows:
typedef NS_OPTIONS(NSUInteger, UIViewAnimationOptions) {
    UIViewAnimationOptionLayoutSubviews        = 1 << 0,
    UIViewAnimationOptionAllowUserInteraction  = 1 << 1,
    UIViewAnimationOptionBeginFromCurrentState = 1 << 2,
    UIViewAnimationOptionRepeat                = 1 << 3,
    UIViewAnimationOptionAutoreverse           = 1 << 4,
    // ...
};
Pretend that an NSUInteger is 8 bits (it isn’t, but let’s keep things simple and short). Then this enumeration means that (in Swift) the following name–value pairs are defined:
UIViewAnimationOptions.layoutSubviews        0b00000001
UIViewAnimationOptions.allowUserInteraction  0b00000010
UIViewAnimationOptions.beginFromCurrentState 0b00000100
UIViewAnimationOptions.repeat                0b00001000
UIViewAnimationOptions.autoreverse           0b00010000
These values can be combined into a single value — a bitmask — that you pass as the options:
argument for your animation. All Cocoa has to do to understand your intentions is to look to see which bits in the value that you pass are set to 1. So, for example, 0b00011000
would mean that UIViewAnimationOptions.repeat
and UIViewAnimationOptions.autoreverse
are both true (and that the others are all false).
The question is how to form the value 0b00011000
in order to pass it. You could form it directly as a literal and set the options:
argument to UIViewAnimationOptions(rawValue:0b00011000)
; but that’s not a very good idea, because it’s error-prone and makes your code incomprehensible. In Objective-C, you’d use the arithmetic bitwise-or operator, analogous to this Swift code:
let val = UIViewAnimationOptions.autoreverse.rawValue |
    UIViewAnimationOptions.repeat.rawValue
let opts = UIViewAnimationOptions(rawValue: val)
The UIViewAnimationOptions type, however, is an option set struct (because it is marked as NS_OPTIONS
in Objective-C), and therefore can be treated much like a Set. For example, given a UIViewAnimationOptions value, you can add an option to it using insert(_:)
:
var opts = UIViewAnimationOptions.autoreverse opts.insert(.repeat)
Alternatively, you can start with an array literal, just as if you were initializing a Set:
let opts : UIViewAnimationOptions = [.autoreverse, .repeat]
To indicate that no options are to be set, pass an empty option set ([]
) or, where permitted, omit the options:
parameter altogether.
The inverse situation is that Cocoa hands you a bitmask, and you want to know whether a certain bit is set. In this example from a UITableViewCell subclass, the cell’s state
comes to us as a bitmask; we want to know about the bit indicating that the cell is showing its edit control. You could do this by extracting the raw values and using the bitwise-and operator:
override func didTransition(to state: UITableViewCellStateMask) {
    let editing = UITableViewCellStateMask.showingEditControlMask.rawValue
    if state.rawValue & editing != 0 {
        // ... the ShowingEditControlMask bit is set ...
    }
}
That’s a tricky formula, all too easy to get wrong. But this is an option set, so the contains(_:)
method tells you the answer:
override func didTransition(to state: UITableViewCellStateMask) {
    if state.contains(.showingEditControlMask) {
        // ... the ShowingEditControlMask bit is set ...
    }
}
Swift’s Set type is bridged to Objective-C NSSet. The untyped medium of interchange is Set<AnyHashable>
. Coming back from Objective-C, if Objective-C doesn’t know what this is a set of, you would probably cast down as needed. As with NSArray, however, NSSet can be marked up to indicate its element type, in which case no casting will be necessary:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    let t = touches.first // an Optional wrapping a UITouch
    // ...
}