A device may contain hardware for sensing the world around itself — where it is located, how it is oriented, how it is moving.
Information about the device’s current location and how that location is changing over time, using its Wi-Fi, cellular networking, and GPS capabilities, along with information about the device’s orientation relative to north, using its magnetometer, is provided through the Core Location framework. You’ll link to CoreLocation.framework and import <CoreLocation/CoreLocation.h>.
Information about the device’s change in speed and attitude using its accelerometer is provided through the UIEvent class (for device shake) and the Core Motion framework, which provides increased accuracy by incorporating the device’s gyroscope, if it has one, as well as the magnetometer; you’ll link to CoreMotion.framework and import <CoreMotion/CoreMotion.h>.
One of the major challenges associated with writing code that takes advantage of the sensors is that not all devices have all of this hardware. If you don’t want to impose stringent restrictions on what devices your app will run on in the first place (UIRequiredDeviceCapabilities in Info.plist), your code must be prepared to fail gracefully and possibly provide a subset of its full capabilities when it discovers that the current device lacks certain features. Moreover, certain sensors may experience momentary inadequacy; for example, Core Location might not be able to get a fix on the device’s position because it can’t see cell towers, GPS satellites, or both. Also, some sensors take time to “warm up,” so that the values you’ll get from them initially will be invalid. You’ll want to respond to such changes in the external circumstances, in order to give the user a decent experience of your application regardless.
Core Location provides facilities for the device to determine and report its location (location services). It takes advantage of three sensors:

Wi-Fi
The device’s Wi-Fi, if turned on, can be compared against a database of known Wi-Fi networks to estimate the device’s position.
Cellular
The device’s cell radio, if present and turned on, can detect nearby cell towers whose positions are known.
GPS
The device’s GPS, if present, can work out the device’s position most accurately, though obtaining a fix may take longer.
Core Location will automatically use whatever facilities the device does have; all you have to do is ask for the device’s location. Core Location allows you to specify how accurate a position fix you want; more accurate fixes may require more time.
The notion of a location is encapsulated by the CLLocation class and its properties, which include:
coordinate
altitude
speed
course
horizontalAccuracy
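For example (a minimal sketch, assuming loc is a CLLocation we’ve received), we might read off the position like this:

    CLLocationCoordinate2D c = loc.coordinate;
    NSLog(@"latitude %f, longitude %f (accurate to %f meters)",
          c.latitude, c.longitude, loc.horizontalAccuracy);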
In addition to the sensor-related considerations I mentioned a moment ago, use of Core Location poses challenges of its own, such as obtaining the user’s authorization and limiting the resulting drain on the battery.
To use Core Location and location services directly, you need a location manager — a CLLocationManager instance. Use of a location manager typically operates along the following lines:
You’ll confirm that the desired services are available. CLLocationManager class methods let you find out whether the user has switched on the device’s location services as a whole (locationServicesEnabled), whether the user has authorized this app to use location services (authorizationStatus), and whether a particular service is available.
If location services are switched off, you can start using a location manager anyway, as a way of getting the runtime to present the dialog asking the user to switch them on. Be prepared, though, for the possibility that the user won’t do so. You can modify the body of this alert by setting the “Privacy — Location Usage Description” key (NSLocationUsageDescription) in your app’s Info.plist (superseding the location manager’s pre–iOS 6 purpose property) to tell the user why you want to track the device’s location. This is a kind of “elevator pitch”; you need to persuade the user in very few words.
You’ll configure the location manager. For example, set its desiredAccuracy if you don’t need best possible accuracy; it might be sufficient for your purposes to know very quickly but very roughly the device’s location (and recall that highest accuracy may also cause the highest battery drain). The accuracy setting is not a filter: the location manager will still send you whatever location information it has, and checking a location’s horizontalAccuracy is then up to you.
The location manager’s distanceFilter lets you specify that you don’t need a location report unless the device has moved a certain distance since the previous report. This can help keep you from being bombarded with events you don’t need. Other configuration settings depend on the particular service you’re asking for, as I’ll explain later.
You’ll set yourself as the location manager’s delegate and call the appropriate start method, such as startUpdatingLocation. The location manager, in turn, will begin calling the appropriate delegate method repeatedly; in the case of startUpdatingLocation, it’s locationManager:didUpdateToLocation:fromLocation:. Your delegate will also always implement locationManager:didFailWithError:, to receive error messages. You’ll deal with each delegate method call in turn. Remember to call the corresponding stop... method when you no longer need delegate method calls.
As a simple example, we’ll turn on location services manually, just long enough to see if we can determine our position. We begin by ascertaining that location services are in fact available and that we have or can get authorization. If all is well, we instantiate CLLocationManager, set ourselves as the delegate, configure the location manager, set some instance variables so we can track what’s happening, and call startUpdatingLocation to turn on location services:
    BOOL ok = [CLLocationManager locationServicesEnabled];
    if (!ok) {
        NSLog(@"oh well");
        return;
    }
    CLAuthorizationStatus auth = [CLLocationManager authorizationStatus];
    if (auth == kCLAuthorizationStatusRestricted ||
            auth == kCLAuthorizationStatusDenied) {
        NSLog(@"sigh");
        return;
    }
    CLLocationManager* lm = [CLLocationManager new];
    self.locman = lm;
    self.locman.delegate = self;
    self.locman.desiredAccuracy = kCLLocationAccuracyBest;
    self.locman.purpose = @"This app would like to tell you where you are.";
    self.startTime = [NSDate date]; // now
    self.gotloc = NO;
    [self.locman startUpdatingLocation];
If something goes wrong, such as the user refusing to authorize this app, we’ll just turn location services back off:
    - (void)locationManager:(CLLocationManager *)manager
           didFailWithError:(NSError *)error {
        NSLog(@"error: %@", [error localizedDescription]);
        // e.g., if user refuses to authorize...
        // ..."The operation couldn't be completed."
        [manager stopUpdatingLocation];
    }
If things don’t go wrong, we’ll be handed our location as soon as it is determined. In this case, I’ve decided to demand accuracy better than 70 meters. If I don’t get it, I wait for the next location, but I also compare each location’s timestamp to the timestamp I created at the outset, so that I won’t wait forever for an accuracy that might never arrive. If I get the desired accuracy within the desired time, I turn off location services and am ready to use the location information:
    - (void)locationManager:(CLLocationManager *)manager
        didUpdateToLocation:(CLLocation *)newLocation
               fromLocation:(CLLocation *)oldLocation {
        if (!self.gotloc &&
                ([newLocation.timestamp timeIntervalSinceDate:self.startTime] > 20)) {
            NSLog(@"this is just taking too long");
            [self.locman stopUpdatingLocation];
            return;
        }
        CLLocationAccuracy acc = newLocation.horizontalAccuracy;
        NSLog(@"%f", acc);
        if (acc > 70)
            return; // wait for better accuracy
        // if we get here, we have an accurate location
        [manager stopUpdatingLocation];
        self.gotloc = YES;
        // ... and now we could do something with newLocation ...
    }
The first time the app runs, the log messages chart the increasing accuracy of the location reports. You can see that it was worth waiting a few seconds to get better accuracy:
    2013-02-09 09:02:29.569 p718p736location[407:707] 45383.659065
    2013-02-09 09:02:31.358 p718p736location[407:707] 1413.314191
    2013-02-09 09:02:32.154 p718p736location[407:707] 163.886905
    2013-02-09 09:02:36.137 p718p736location[407:707] 10.000000
Core Location will also use the GPS to determine which way and how quickly the device is moving. This information, if available, is returned automatically as part of a CLLocation object in locationManager:didUpdateToLocation:fromLocation:, through its speed and course properties. For information about the device’s heading (which way is north), see the next section.
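For example (a minimal sketch, inside the same delegate method shown earlier):

    // negative speed or course values mean the information is invalid
    if (newLocation.speed >= 0 && newLocation.course >= 0)
        NSLog(@"moving at %f meters/sec on course %f degrees clockwise from north",
              newLocation.speed, newLocation.course);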
You can also use Core Location when your app is not in the foreground. There are two quite different ways to do this. The first is that your app can run in the background. Use of Core Location in the background is similar to production and recording of sound in the background (Chapter 27): you set the UIBackgroundModes key of your app’s Info.plist, giving it a value of location. This tells the system that if you have turned on location services and the user clicks the Home button, your app should not be suspended, the use of location services should continue, and your delegate should keep receiving Core Location events. Background use of location services can cause a power drain, but if you want your app to function as a positional data logger, for instance, it may be the only way; you can also help conserve power by making judicious choices, such as setting a coarse distanceFilter value and not requiring high accuracy. Starting in iOS 6, Core Location can operate in deferred mode (allowDeferredLocationUpdatesUntilTraveled:timeout:) so that your background app doesn’t receive updates until the user has moved a specified amount or until a fixed time interval has elapsed; this, too, can help conserve power, as the device may be able to power down some of its sensors temporarily.
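Here’s a minimal sketch of deferred mode (the distance and timeout values are arbitrary illustrations):

    // defer background delivery until the user has moved 500 meters
    // or 60 seconds have elapsed, whichever comes first
    if ([CLLocationManager deferredLocationUpdatesAvailable])
        [self.locman allowDeferredLocationUpdatesUntilTraveled:500 timeout:60];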
The second way of using Core Location without being in the foreground doesn’t even require your app to be running. You do not have to set the UIBackgroundModes of your Info.plist. You register with the system to receive a certain kind of notification, and when such a notification arrives, your app will be launched if it isn’t running. There are two notifications of this kind:
If significantLocationChangeMonitoringAvailable is YES, you can call startMonitoringSignificantLocationChanges. The delegate’s locationManager:didUpdateToLocation:fromLocation: will be called when the device’s location has changed significantly.
If regionMonitoringAvailable and regionMonitoringEnabled are YES, you can call startMonitoringForRegion: or startMonitoringForRegion:desiredAccuracy: for each region in which you are interested. Regions are collected as an NSSet, which is the location manager’s monitoredRegions. A region is a CLRegion, initialized with initCircularRegionWithCenter:radius:identifier:; the identifier serves as a unique key, so that if you start monitoring for a region whose identifier matches that of a region already in the monitoredRegions set, the latter will be ejected from the set. The following delegate methods may be called (a short sketch follows the list):
locationManager:didEnterRegion:
locationManager:didExitRegion:
locationManager:monitoringDidFailForRegion:withError:
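For instance, here’s a minimal sketch of setting up region monitoring (the coordinates, radius, and identifier are made-up illustrations):

    CLLocationCoordinate2D center = CLLocationCoordinate2DMake(34.0, -118.2);
    CLRegion* reg = [[CLRegion alloc]
        initCircularRegionWithCenter:center
                              radius:100 // meters
                          identifier:@"work"]; // unique key
    [self.locman startMonitoringForRegion:reg];

And in the delegate:

    - (void)locationManager:(CLLocationManager *)manager
             didEnterRegion:(CLRegion *)region {
        NSLog(@"entered region %@", region.identifier);
    }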
For example, a reminder alarm uses region monitoring to notify the user when approaching or leaving a specific place (geofencing), as shown in Chapter 32.
Both significant location monitoring and region monitoring use cell tower position to estimate the device’s location. Since the cell is probably working anyway — for example, the device is a phone, so the cell is always on and is always concerned with what cell towers are available — little or no additional power is required. Apple says that the system will also take advantage of other clues (requiring no extra battery drain) to decide that there may have been a change in location: for example, the device may observe a change in the available Wi-Fi networks, strongly suggesting that the device has moved.
As I’ve already mentioned, notifications for significant location change monitoring and region monitoring can arrive even if your app isn’t in the foreground. In that case, there are two possible states in which your app might find itself when an event arrives:

Your app is suspended in the background
Your app is woken (still in the background) long enough to receive the normal delegate event and respond to it.

Your app isn’t running at all
Your app is relaunched (into the background), and your app delegate will receive application:didFinishLaunchingWithOptions: with an NSDictionary containing UIApplicationLaunchOptionsLocationKey, thus allowing it to discern the special nature of the situation. At this point you probably have no location manager — your app has just launched from scratch. So you should get yourself a location manager and start up location services for long enough to receive the normal delegate event.
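A minimal sketch of that response in the app delegate (assuming, for illustration, that we were using the significant location change service):

    - (BOOL)application:(UIApplication *)application
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
        if (launchOptions[UIApplicationLaunchOptionsLocationKey]) {
            // relaunched because of a location event:
            // re-create the location manager and restart the service
            self.locman = [CLLocationManager new];
            self.locman.delegate = self;
            [self.locman startMonitoringSignificantLocationChanges];
        }
        return YES;
    }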
For appropriately equipped devices, Core Location also supports use of the magnetometer to determine which way the device is facing (its heading). Although this information is accessed through a location manager, you do not need location services to be turned on, nor your app to be authorized, merely to use the magnetometer to report the device’s orientation with respect to magnetic north; but you do need those things in order to report true north, as this depends on the device’s location.
As with location, you’ll first check that the desired feature is available (headingAvailable); then you’ll instantiate and configure the location manager, and call startUpdatingHeading. The delegate will be sent locationManager:didUpdateHeading:. Heading values are reported as a CLHeading; recall that this involves degrees (not radians) clockwise from the reference direction.
In this example, I’ll use the device as a compass. The headingFilter setting is to prevent us from being bombarded constantly with readings. For best results, the device should probably be held level (like a tabletop, or a compass); the reported heading will be the direction in which the top of the device (the end away from the Home button) is pointing:
    BOOL ok = [CLLocationManager headingAvailable];
    if (!ok) {
        NSLog(@"drat");
        return;
    }
    CLLocationManager* lm = [CLLocationManager new];
    self.locman = lm;
    self.locman.delegate = self;
    self.locman.headingFilter = 3;
    self.locman.headingOrientation = CLDeviceOrientationPortrait;
    [self.locman startUpdatingHeading];
In the delegate, I’ll display our magnetic heading as a rough cardinal direction in a label in the interface (lab):
    - (void) locationManager:(CLLocationManager *)manager
            didUpdateHeading:(CLHeading *)newHeading {
        CGFloat h = newHeading.magneticHeading;
        __block NSString* dir = @"N";
        NSArray* cards = @[@"N", @"NE", @"E", @"SE",
                           @"S", @"SW", @"W", @"NW"];
        [cards enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
            if (h < 45.0/2.0 + 45*idx) {
                dir = obj;
                *stop = YES;
            }
        }];
        if (self.lab.hidden)
            self.lab.hidden = NO;
        if (![self.lab.text isEqualToString:dir])
            self.lab.text = dir;
        NSLog(@"%f %@", h, dir);
    }
In that code, I asked only for the heading’s magneticHeading. I can freely ask for its trueHeading, but the resulting value will be invalid (a negative number) unless we are also receiving location updates.
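A minimal sketch of what that might look like (assuming we have authorization, as discussed earlier in this chapter):

    [self.locman startUpdatingLocation]; // needed for trueHeading to be valid
    [self.locman startUpdatingHeading];

Then, in the heading delegate method:

    CLLocationDirection t = newHeading.trueHeading;
    if (t >= 0) // negative means the true heading is invalid
        NSLog(@"true heading: %f", t);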
(Combining the magnetometer with the compass interface we developed in Chapter 16 and Chapter 17, so as to simulate a physical compass, is left as an exercise for the reader.)
Acceleration results from the application of a force to the device, and is detected through the device’s accelerometer, supplemented by the gyroscope if it has one. Gravity is a force, so the accelerometer always has something to measure, even if the user isn’t consciously applying a force to the device; thus the device can report its attitude relative to the vertical.
Acceleration information can arrive in two ways: as a shake event (a prepackaged UIEvent reporting that the user has shaken the device), or through the Core Motion framework, which provides raw and processed readings from the accelerometer and related sensors.
A shake event is a UIEvent (Chapter 18). Receiving shake events is rather like receiving remote events (Chapter 27), involving the notion of the first responder. To receive shake events, your app must contain a UIResponder which returns YES from canBecomeFirstResponder and which is in fact first responder. This responder, or a UIResponder further up the responder chain, should implement some or all of these methods:
motionBegan:withEvent:
Something has started that might or might not turn out to be a shake.
motionEnded:withEvent:
The motion reported in motionBegan:withEvent: is over and has turned out to be a shake.
motionCancelled:withEvent:
The motion reported in motionBegan:withEvent: wasn’t a shake after all.
Thus, it might be sufficient to implement motionEnded:withEvent:, because this arrives if and only if the user performs a shake gesture. The first parameter will be the event subtype, but at present this is guaranteed to be UIEventSubtypeMotionShake, so testing it is pointless.
The view controller in charge of the current view is a good candidate to receive shake events. Thus, a minimal implementation might look like this:
    - (BOOL) canBecomeFirstResponder {
        return YES;
    }
    - (void) viewDidAppear: (BOOL) animated {
        [super viewDidAppear: animated];
        [self becomeFirstResponder];
    }
    - (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
        NSLog(@"hey, you shook me!");
    }
By default, if the first responder is of a type that supports undo (such as a UITextField), and if motionBegan:withEvent: is sent up the responder chain, and if you have not set the shared UIApplication’s applicationSupportsShakeToEdit property to NO, a shake will be handled through an Undo or Redo alert. Your view controller might not want to rob any responders in its view of this capability. A simple way to prevent this is to test whether the view controller is itself the first responder; if it isn’t, we call super to pass the event on up the responder chain:
    - (void)motionEnded:(UIEventSubtype)motion withEvent:(UIEvent *)event {
        if ([self isFirstResponder])
            NSLog(@"hey, you shook me!");
        else
            [super motionEnded:motion withEvent:event];
    }
If the device has an accelerometer but no gyroscope, you can learn about the forces being applied to it, but some compromises will be necessary. The chief problem is that, even if the device is completely motionless, its acceleration values will constitute a normalized vector pointing toward the center of the earth, popularly known as gravity. The accelerometer is thus constantly reporting a combination of gravity and user-induced acceleration. This is good and bad. It’s good because it means that, with certain restrictions, you can use the accelerometer to detect the device’s attitude in space. It’s bad because gravity values and user-induced acceleration values are mixed together. Fortunately, there are ways to separate these values mathematically:

A low-pass filter
Damps out rapid fluctuations, leaving the slowly changing gravity component.
A high-pass filter
Removes the steady gravity component, leaving the rapidly changing user-induced acceleration.
In some situations, it is desirable to apply both a low-pass filter and a high-pass filter, so as to learn both the gravity values and the user acceleration values. A common additional technique is to run the output of the high-pass filter itself through a low-pass filter to reduce noise and small twitches. Apple provides some nice sample code for implementing a low-pass or a high-pass filter; see especially the AccelerometerGraph example, which is also very helpful for exploring how the accelerometer behaves.
The technique of applying filters to the accelerometer output has some serious downsides, which are inevitable in a device that lacks a gyroscope: a filter needs time to settle, so responsiveness suffers, and the separation is imperfect, since a slow, sustained user-induced acceleration is hard to distinguish from a tilt of the device.
There are actually two ways to read the raw accelerometer values: UIAccelerometer and Core Motion. UIAccelerometer is slated for deprecation, and its delegate method is in fact deprecated, so I’ll describe how to read the raw accelerometer values with Core Motion. The technique is really a subset of how you read any values with Core Motion; in some ways it is similar to how you use Core Location:
You’ll instantiate CMMotionManager, confirm that the desired hardware is available, set the interval at which you want the motion manager to update itself with new sensor readings, and call the appropriate start method.
Poll the motion manager whenever you want data, asking for the appropriate data property. This step is surprising; you probably expected that the motion manager would call into a delegate, but in fact a motion manager has no delegate. The polling interval doesn’t have to be the same as the motion manager’s update interval; when you poll, you’ll obtain the motion manager’s current data — that is, the data generated by its most recent update, whenever that was.
If your app’s purpose is to collect all the data, then instead of calling a start method, you can call a start...UpdatesToQueue:withHandler: method and receive callbacks in a block, possibly on a background thread, managed by an NSOperationQueue (Chapter 38); but this is an advanced technique and you aren’t likely to need it, so I’m not going to talk about it (beyond the small sketch after this list).
You’ll call the corresponding stop method when you no longer need data.
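For the curious, here’s a minimal sketch of that block-based alternative (we won’t need it in this chapter):

    NSOperationQueue* q = [NSOperationQueue new];
    [self.motman startAccelerometerUpdatesToQueue:q
        withHandler:^(CMAccelerometerData* data, NSError* error) {
            // called on every update, possibly on a background thread
            NSLog(@"%f %f %f", data.acceleration.x,
                  data.acceleration.y, data.acceleration.z);
        }];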
In this example, I will simply report whether the device is lying flat on its back. I start by creating and configuring my motion manager, and I launch a repeating timer to trigger polling:
    self.motman = [CMMotionManager new];
    if (!self.motman.accelerometerAvailable) {
        NSLog(@"oh well");
        return;
    }
    self.motman.accelerometerUpdateInterval = 1.0 / 30.0;
    [self.motman startAccelerometerUpdates];
    self.timer =
        [NSTimer scheduledTimerWithTimeInterval:self.motman.accelerometerUpdateInterval
                                         target:self
                                       selector:@selector(pollAccel:)
                                       userInfo:nil
                                        repeats:YES];
My pollAccel: method is now being called repeatedly. In pollAccel:, I ask the motion manager for its accelerometer data. This arrives as a CMAccelerometerData, which is a timestamp plus a CMAcceleration; a CMAcceleration is simply a struct of three values, one for each axis of the device, measured in Gs. The positive x-axis points to the right of the device. The positive y-axis points toward the top of the device, away from the Home button. The positive z-axis points out of the screen toward the user.
The two axes orthogonal to gravity, which are the x and y axes when the device is lying more or less on its back, are much more accurate and sensitive to small variation than the axis pointing toward or away from gravity. So our approach is to ask first whether the x and y values are close to zero; only then do we use the z value to learn whether the device is on its back or on its face. To keep from updating our interface constantly, we implement a crude state machine; the state (an instance variable) starts out at -1, and then switches between 0 (device on its back) and 1 (device not on its back), and we update the interface only when there is a state change:
    CMAccelerometerData* dat = self.motman.accelerometerData;
    CMAcceleration acc = dat.acceleration;
    CGFloat x = acc.x;
    CGFloat y = acc.y;
    CGFloat z = acc.z;
    CGFloat accu = 0.08; // feel free to experiment with this value
    if (fabs(x) < accu && fabs(y) < accu && z < -0.5) {
        if (state == -1 || state == 1) {
            state = 0;
            self.label.text = @"I'm lying on my back... ahhh...";
        }
    } else {
        if (state == -1 || state == 0) {
            state = 1;
            self.label.text = @"Hey, put me back down on the table!";
        }
    }
This works, but it’s sensitive to small motions of the device on the table. To damp this sensitivity, we can run our input through a low-pass filter. The low-pass filter code comes straight from Apple’s own examples, and involves maintaining the previously filtered reading as a set of instance variables:
    -(void)addAcceleration:(CMAcceleration)accel {
        double alpha = 0.1;
        self->oldX = accel.x * alpha + self->oldX * (1.0 - alpha);
        self->oldY = accel.y * alpha + self->oldY * (1.0 - alpha);
        self->oldZ = accel.z * alpha + self->oldZ * (1.0 - alpha);
    }
Our polling code now starts out by passing the data through the filter:
    CMAccelerometerData* dat = self.motman.accelerometerData;
    CMAcceleration acc = dat.acceleration;
    [self addAcceleration: acc];
    CGFloat x = self->oldX;
    CGFloat y = self->oldY;
    CGFloat z = self->oldZ;
    // ... and the rest is as before ...
In this next example, the user is allowed to slap the side of the device against an open hand — perhaps as a way of telling it to go to the next or previous image or whatever it is we’re displaying. We pass the acceleration input through a high-pass filter to eliminate gravity (again, the filter code comes straight from Apple’s examples):
    -(void)addAcceleration:(CMAcceleration)accel {
        double alpha = 0.1;
        self->oldX = accel.x - ((accel.x * alpha) + (self->oldX * (1.0 - alpha)));
        self->oldY = accel.y - ((accel.y * alpha) + (self->oldY * (1.0 - alpha)));
        self->oldZ = accel.z - ((accel.z * alpha) + (self->oldZ * (1.0 - alpha)));
    }
What we’re looking for, in our polling routine, is a high positive or negative x value. A single slap is likely to consist of several consecutive readings above our threshold, but we want to report each slap only once, so we take advantage of the timestamp attached to a CMAccelerometerData, maintaining the timestamp of our previous high reading as an instance variable and ignoring readings that are too close to one another in time. Another problem is that a sudden jerk involves both an acceleration (as the user starts the device moving) and a deceleration (as the device stops moving); thus a left slap might be preceded by a high value in the opposite direction, which we might interpret wrongly as a right slap. We can compensate crudely, at the expense of some latency, with delayed performance (the report: method simply logs to the console):
    CMAccelerometerData* dat = self.motman.accelerometerData;
    CMAcceleration acc = dat.acceleration;
    [self addAcceleration: acc];
    CGFloat x = self->oldX;
    CGFloat thresh = 1.0;
    if ((x < -thresh) || (x > thresh))
        NSLog(@"%f", x);
    if (x < -thresh) {
        if (dat.timestamp - self->oldTime > 0.5 || self->lastSlap == 1) {
            self->oldTime = dat.timestamp;
            self->lastSlap = -1;
            [NSObject cancelPreviousPerformRequestsWithTarget:self];
            [self performSelector:@selector(report:)
                       withObject:@"left" afterDelay:0.5];
        }
    }
    if (x > thresh) {
        if (dat.timestamp - self->oldTime > 0.5 || self->lastSlap == -1) {
            self->oldTime = dat.timestamp;
            self->lastSlap = 1;
            [NSObject cancelPreviousPerformRequestsWithTarget:self];
            [self performSelector:@selector(report:)
                       withObject:@"right" afterDelay:0.5];
        }
    }
The gesture we’re detecting is a little tricky to make: the user must slap the device into an open hand and hold it there; if the device jumps out of the open hand, that movement may be detected as the last in the series, resulting in the wrong report (left instead of right, or vice versa). And the latency of our gesture detection is very high; here’s a typical successful detection of a leftward slap:
    2012-02-13 12:03:18.673 p724p742smackMe[4024:707] -1.204655
    2012-02-13 12:03:18.743 p724p742smackMe[4024:707] -1.153451
    2012-02-13 12:03:18.775 p724p742smackMe[4024:707] 1.168514
    2012-02-13 12:03:18.809 p724p742smackMe[4024:707] -1.426584
    2012-02-13 12:03:18.875 p724p742smackMe[4024:707] -1.297352
    2012-02-13 12:03:18.942 p724p742smackMe[4024:707] -1.072046
    2012-02-13 12:03:19.316 p724p742smackMe[4024:707] left
The gesture started with an involuntary shake; then the rapid acceleration to the left was detected as a positive value; finally, the rapid deceleration was detected as a negative value, and it took several tenths of a second for our delayed performance to decide that this was the end of the gesture and report a leftward slap. Of course we might try tweaking some of the magic numbers in this code to improve accuracy and performance, but a more sophisticated analysis would probably involve storing a stream of all the most recent CMAccelerometerData objects and studying the entire stream to work out the overall trend.
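As a speculative sketch of that direction (this is not the chapter’s example code; the 20-reading window and the recent array are arbitrary choices), we might accumulate readings in an NSMutableArray instance variable and look at their net trend:

    - (void) pollAccel: (NSTimer*) t {
        CMAccelerometerData* dat = self.motman.accelerometerData;
        [self.recent addObject:dat]; // self.recent is an NSMutableArray
        if ([self.recent count] > 20)
            [self.recent removeObjectAtIndex:0]; // keep the 20 most recent readings
        double sum = 0;
        for (CMAccelerometerData* d in self.recent)
            sum += d.acceleration.x;
        // a strongly positive or negative sum suggests the overall direction
        // of the slap, despite momentary reversals within the gesture
    }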
The inclusion of an electronic gyroscope in the panoply of onboard hardware in some devices has made a huge difference in the accuracy and speed of gravity and attitude reporting. A gyroscope has the property that its attitude in space remains constant; thus it can detect any change in the attitude of the containing device. This has two important consequences for accelerometer measurements: gravity and user-induced acceleration can be distinguished quickly and accurately, without the latency and guesswork of filters, and the device’s attitude in space can be tracked in detail rather than being deduced from the direction of gravity alone.
It is possible to track the raw gyroscope data: make sure the device has a gyroscope, and then call startGyroUpdates. What we get from the motion manager is a CMGyroData object, which combines a timestamp with a CMRotationRate that reports the rate of rotation around each axis, measured in radians per second, where a positive value is counterclockwise as seen by someone whose eye is pointed to by the positive axis. (This is the opposite of the direction graphed in Figure 16.7.) The problem, however, is that the gyroscope values are scaled and biased. This means that the values are based on an arbitrary scale and are increasing (or decreasing) at a roughly constant rate. Thus there is very little merit in the exercise of dealing with the raw gyroscope data.
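Still, for completeness, here’s a minimal sketch of polling the raw gyroscope (assuming the same motion manager setup as in the accelerometer examples):

    if (!self.motman.gyroAvailable) {
        NSLog(@"oh well");
        return;
    }
    [self.motman startGyroUpdates];
    // later, in a polling routine:
    CMGyroData* dat = self.motman.gyroData;
    CMRotationRate rot = dat.rotationRate; // radians per second around each axis
    NSLog(@"%f %f %f", rot.x, rot.y, rot.z);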
What you are likely to be interested in is a combination of at least the gyroscope and the accelerometer. The mathematics required to combine the data from these sensors can be daunting. Fortunately, there’s no need to know anything about that. Core Motion will happily package up the calculated combination of data as a CMDeviceMotion instance, with the effects of the sensors’ internal bias and scaling already factored out. CMDeviceMotion consists of the following properties, all of which provide a triple of values corresponding to the device’s natural 3D frame (x increasing to the right, y increasing to the top, z increasing out the front):
gravity
A CMAcceleration expressing the force of gravity only, as a vector of magnitude 1 pointing to the center of the earth, measured in Gs.
userAcceleration
A CMAcceleration expressing the user-induced acceleration only, with the gravity component factored out.
rotationRate
A CMRotationRate describing how the device is rotating, like the raw gyroscope data’s rotationRate but with scale and bias accounted for.
magneticField
A CMCalibratedMagneticField describing (in its field) the magnetic forces acting on the device, measured in microteslas. The sensor’s internal bias has already been factored out. The CMMagneticField’s accuracy is one of the following:
CMMagneticFieldCalibrationAccuracyUncalibrated
CMMagneticFieldCalibrationAccuracyLow
CMMagneticFieldCalibrationAccuracyMedium
CMMagneticFieldCalibrationAccuracyHigh
attitude
A CMAttitude, descriptive of the device’s instantaneous attitude in space. When you ask the motion manager to start generating updates, you can ask for any of four reference systems for the attitude (having first called the class method availableAttitudeReferenceFrames to ascertain that the desired reference frame is available on this device):
CMAttitudeReferenceFrameXArbitraryZVertical
CMAttitudeReferenceFrameXArbitraryCorrectedZVertical
CMAttitudeReferenceFrameXMagneticNorthZVertical
CMAttitudeReferenceFrameXTrueNorthZVertical
The attitude value’s numbers can be accessed through various CMAttitude properties corresponding to three different systems, each being convenient for a different purpose:

pitch, roll, and yaw (Euler angles)
rotationMatrix
quaternion
In this example, we turn the device into a simple compass/clinometer, merely by asking for its attitude with reference to magnetic north and taking its pitch, roll, and yaw. We begin by making the usual preparations; notice the use of the showsDeviceMovementDisplay property, which will allow the runtime to prompt the user to move the device in a figure-of-eight if the magnetometer needs calibration:
    self.motman = [CMMotionManager new];
    if (!self.motman.deviceMotionAvailable) {
        NSLog(@"oh well");
        return;
    }
    CMAttitudeReferenceFrame f = CMAttitudeReferenceFrameXMagneticNorthZVertical;
    if (([CMMotionManager availableAttitudeReferenceFrames] & f) == 0) {
        NSLog(@"darn");
        return;
    }
    self.motman.showsDeviceMovementDisplay = YES;
    self.motman.deviceMotionUpdateInterval = 1.0 / 30.0;
    [self.motman startDeviceMotionUpdatesUsingReferenceFrame:f];
    NSTimeInterval t = self.motman.deviceMotionUpdateInterval * 10;
    self.timer = [NSTimer scheduledTimerWithTimeInterval:t
                                                  target:self
                                                selector:@selector(pollAttitude:)
                                                userInfo:nil
                                                 repeats:YES];
In pollAttitude:, we wait until the magnetometer is ready, and then we start taking attitude readings (converted to degrees):
    CMDeviceMotion* mot = self.motman.deviceMotion;
    if (mot.magneticField.accuracy <= CMMagneticFieldCalibrationAccuracyLow)
        return; // not ready yet
    CMAttitude* att = mot.attitude;
    CGFloat to_deg = 180.0 / M_PI; // I like degrees
    NSLog(@"%f %f %f",
          att.pitch * to_deg, att.roll * to_deg, att.yaw * to_deg);
The values are all close to zero when the device is level with its top pointing to magnetic north, and each value increases as the device is rotated counterclockwise with respect to an eye that has the corresponding positive axis pointing at it. So, for example, a device held upright (top pointing at the sky) has a pitch approaching 90; a device lying on its right edge has a roll approaching 90; and a device lying on its back with its top pointing west has a yaw approaching 90.
There are some quirks to be aware of in the way that Euler angles operate mathematically:
roll and yaw increase with counterclockwise rotation from 0 to π (180 degrees) and then jump to -π (-180 degrees) and continue to increase to 0 as the rotation completes a circle; but pitch increases to π/2 (90 degrees) and then decreases to 0, then decreases to -π/2 (-90 degrees) and increases to 0. This means that attitude alone, if we are exploring it through pitch, roll, and yaw, is insufficient to describe the device’s attitude, since a pitch value of, say, π/4 (45 degrees) could mean two different things. To distinguish those two things, we can supplement attitude with the z-component of gravity:
NSLog(@"%f %f %f", att.pitch * to_deg, att.roll * to_deg, att.yaw * to_deg); CMAcceleration g = mot.gravity; NSLog(@"pitch is tilted %@", g.z > 0 ? @"forward" : @"back");
An alternative is to use the attitude’s rotationMatrix, which does not suffer from this limitation.
This next (simple and very silly) example illustrates a use of CMAttitude’s rotationMatrix property. Our goal is to make a CALayer rotate in response to the current attitude of the device. We start as before, except that our reference frame is CMAttitudeReferenceFrameXArbitraryZVertical; we are interested in how the device moves from its initial attitude, without reference to any particular fixed external direction such as magnetic north. In pollAttitude:, our first step is to store the device’s current attitude in a CMAttitude instance variable, ref:
    CMDeviceMotion* mot = self.motman.deviceMotion;
    CMAttitude* att = mot.attitude;
    if (!self.ref) {
        self.ref = att;
        return;
    }
That code works correctly because on the first few polls, as the attitude-detection hardware warms up, att is nil, so we don’t get past the return call until we have a valid initial attitude. Our next step is highly characteristic of how CMAttitude is used: we call the CMAttitude method multiplyByInverseOfAttitude:, which transforms our attitude so that it is relative to the stored initial attitude:
    [att multiplyByInverseOfAttitude:self.ref];
Finally, we apply the attitude’s rotation matrix directly to a layer in our interface as a transform. Well, not quite directly: a rotation matrix is a 3×3 matrix, whereas a CATransform3D, which is what we need in order to set a layer’s transform, is a 4×4 matrix. However, it happens that the top left nine entries in a CATransform3D’s 4×4 matrix constitute its rotation component, so we start with an identity matrix and set those entries directly:
    CMRotationMatrix r = att.rotationMatrix;
    CATransform3D t = CATransform3DIdentity;
    t.m11 = r.m11; t.m12 = r.m12; t.m13 = r.m13;
    t.m21 = r.m21; t.m22 = r.m22; t.m23 = r.m23;
    t.m31 = r.m31; t.m32 = r.m32; t.m33 = r.m33;
    CALayer* lay = // whatever;
    [CATransaction setDisableActions:YES];
    lay.transform = t;
The result is that the layer apparently tries to hold itself still as the device rotates. The example is rather crude because we aren’t using OpenGL to draw a three-dimensional object, but it illustrates the principle well enough.
There is a quirk to be aware of in this case as well: over time, the transform has a tendency to drift. Thus, even if we leave the device stationary, the layer will gradually rotate. That is the sort of effect that CMAttitudeReferenceFrameXArbitraryCorrectedZVertical is designed to help mitigate, by bringing the magnetometer into play.
Here are some additional considerations to be aware of when using Core Motion:
There is no Core Motion UIBackgroundModes setting in an Info.plist, but Core Motion does continue to work if your app is running in the background for some other legitimate reason. For example, you might run in the background because you’re using Core Location, and take advantage of this to employ Core Motion as well.