iOS provides various means and technologies for allowing your app to produce sound (and even to input it). The topic is a large one, so this chapter can only introduce it. You’ll want to read Apple’s Multimedia Programming Guide and Core Audio Overview.
None of the classes discussed in this chapter provide any user interface within your application for allowing the user to stop and start playback of sound. You can create your own such interface, and I’ll discuss how you can associate the “remote control” buttons with your application. Also, a web view (Chapter 24) supports the HTML 5 <audio> tag; this can be a simple, lightweight way to play audio and to allow the user to control playback. (By default, a web view even allows use of AirPlay.) Alternatively, you could treat the sound as a movie and use the MPMoviePlayerController class discussed in Chapter 28; this can also be a good way to play a sound file located remotely over the Internet.
The simplest form of sound is system sound, which is the iOS equivalent of the basic computer “beep.” This is implemented through System Sound Services; you’ll need to import <AudioToolbox/AudioToolbox.h> and link to AudioToolbox.framework.
You’ll be calling one of two C functions, which behave very similarly to one another: AudioServicesPlayAlertSound and AudioServicesPlaySystemSound. (On an iPhone, you can also make the device vibrate by passing kSystemSoundID_Vibrate as the name of the “sound”.)
The sound file to be played needs to be an uncompressed AIFF or WAV file (or an Apple CAF file wrapping one of these). To hand the sound to these functions, you’ll need a SystemSoundID, which you obtain by calling AudioServicesCreateSystemSoundID with a CFURLRef (or NSURL) that points to a sound file. In this example, the sound file is in our app bundle:
NSURL* sndurl =
    [[NSBundle mainBundle] URLForResource:@"test" withExtension:@"aif"];
SystemSoundID snd;
AudioServicesCreateSystemSoundID((__bridge CFURLRef)sndurl, &snd);
AudioServicesPlaySystemSound(snd);
However, there’s a problem with that code: we have failed to exercise proper memory management. We need to call AudioServicesDisposeSystemSoundID to release our SystemSoundID. But when shall we do this? AudioServicesPlaySystemSound executes asynchronously. So the solution can’t be to call AudioServicesDisposeSystemSoundID in the next line of the same snippet, because this would release our sound just as it is about to start playing, resulting in silence. The solution is to implement a sound completion handler, a function that is called when the sound has finished playing. So, our sound-playing snippet now looks like this:
NSURL* sndurl =
    [[NSBundle mainBundle] URLForResource:@"test" withExtension:@"aif"];
SystemSoundID snd;
AudioServicesCreateSystemSoundID((__bridge CFURLRef)sndurl, &snd);
AudioServicesAddSystemSoundCompletion(snd, nil, nil, SoundFinished, nil);
AudioServicesPlaySystemSound(snd);
And here is our sound completion handler, the SoundFinished function referred to in the previous snippet:
void SoundFinished (SystemSoundID snd, void* context) {
    AudioServicesRemoveSystemSoundCompletion(snd);
    AudioServicesDisposeSystemSoundID(snd);
}
Note that because we are about to release the sound, we first release the sound completion handler information applied to it. The last argument passed to AudioServicesAddSystemSoundCompletion is a pointer-to-void that comes back as the second parameter of our sound completion handler function; you can use this parameter in any way you like, such as to help identify the sound.
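For instance, here’s a variant sketch of the preceding snippets that passes an object pointer through the context argument; the MySoundController class is hypothetical, and note that the __bridge cast does not retain the object, so something else must keep it alive:

// When installing the completion handler, pass self as the context:
AudioServicesAddSystemSoundCompletion(snd, nil, nil, SoundFinished,
                                      (__bridge void*)self);

// The handler recovers the object from the context pointer:
void SoundFinished (SystemSoundID snd, void* context) {
    MySoundController* controller = (__bridge MySoundController*)context;
    // ... consult controller to identify or respond to this sound ...
    AudioServicesRemoveSystemSoundCompletion(snd);
    AudioServicesDisposeSystemSoundID(snd);
}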
If your app is going to use a more sophisticated way of producing sound, such as an audio player (discussed in the next section), it must specify a policy regarding that sound. This policy will answer such questions as: should sound stop when the screen is locked? Should sound interrupt existing sound (being played, for example, by the Music app) or should it be layered on top of it?
Your policy is declared in an audio session, which is a singleton AVAudioSession instance created automatically as your app launches. You’ll need to link to AVFoundation.framework and import <AVFoundation/AVFoundation.h>. You’ll refer to your app’s AVAudioSession by way of the class method sharedInstance.
Before iOS 6, it was also possible, and sometimes necessary, to talk to your audio session in C, by linking to AudioToolbox.framework and importing <AudioToolbox/AudioToolbox.h>. In iOS 6, the C API isn’t needed, and I don’t use it in this edition of the book.
To declare your audio session’s policy, you’ll set its category, by calling setCategory:withOptions:error:. The basic policies for audio playback are:

Ambient (AVAudioSessionCategoryAmbient)
Your app’s sound plays alongside other audio, such as the Music app’s, without interrupting it; it is silenced by the device’s Silent switch and by screen locking.

Solo Ambient (AVAudioSessionCategorySoloAmbient, the default)
Your app’s sound silences other audio; it, too, is silenced by the Silent switch and by screen locking.

Playback (AVAudioSessionCategoryPlayback)
Your app’s sound silences other audio, and is not silenced by the Silent switch or by screen locking.
Your audio session’s otherAudioPlaying property can tell you whether audio is already playing in some other app, such as the Music app. Apple suggests that you might want your choice of audio session policy, and perhaps what kinds of sound your app plays, to take into account the answer to that question.
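For instance, here’s a minimal sketch (my own, not Apple’s) of taking that answer into account when choosing a category:

// If another app (such as Music) is already playing audio,
// choose a category that lets it continue.
AVAudioSession* session = [AVAudioSession sharedInstance];
NSString* category = session.otherAudioPlaying ?
    AVAudioSessionCategoryAmbient : AVAudioSessionCategorySoloAmbient;
[session setCategory:category withOptions:0 error:nil];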
Audio session category options (the withOptions: parameter of setCategory:withOptions:error:) allow you to modify the playback policies to some extent. For example:

Your sound is layered on top of other audio, rather than interrupting it (AVAudioSessionCategoryOptionMixWithOthers). Your sound is then said to be mixable. If you don’t make your sound mixable, then mixable background audio will still be able to play, but non-mixable background audio won’t be able to play.

Other audio is made quieter (“ducked”) while your audio session is active (AVAudioSessionCategoryOptionDuckOthers). Ducking does not depend automatically on whether your app is actively producing any sound; rather, it starts as soon as you turn this override on and remains in place until your audio session is deactivated.
It is common practice to declare your app’s initial audio session policy very early in the life of the app, possibly as early as application:didFinishLaunchingWithOptions:. You can then, if necessary, change your audio session policy in real time, as your app runs.
Your audio session policy is not in effect, however, unless your audio session is also active. By default, it isn’t. Thus, asserting your audio session policy is done by a combination of configuring the audio session and activating the audio session. To activate (or deactivate) your audio session, you call setActive:withOptions:error:.
The question then is when to call setActive:withOptions:error:. This is a little tricky because of multitasking. Your audio session can be deactivated automatically if your app is no longer active. So if you want your policy to be obeyed under all circumstances, you must explicitly activate your audio session each time your app becomes active. The best place to do this is in applicationDidBecomeActive:, as this is the only method guaranteed to be called every time your app is reactivated under circumstances where your audio session might have been deactivated in the background. (See Chapter 11 for how an app resigns and resumes active status.)
The first parameter to setActive:withOptions:error: is a BOOL saying whether we want to activate or deactivate our audio session. There are various reasons why you might deactivate (and perhaps reactivate) your audio session over the lifetime of your app.
One such reason is that you no longer need to hog the device’s audio, and you want to yield to other apps to play music in the background. The second parameter to setActive:withOptions:error: lets you supply a single option, AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation (only when the first parameter is NO). By doing this, you tell the system to allow any audio suspended by the activation of your audio session to resume. After all, enforcing a Playback audio session policy that silences music that was playing in the background is not very nice if your app isn’t actively producing any sound at the moment; better to activate your Playback audio session only when your app is actively producing sound, and deactivate it when your sound finishes. When you do that along with this option, the effect is one of pausing background audio, playing your audio, and then resuming background audio (if the app providing the background audio responds correctly to this option). I’ll give an example later in this chapter.
Another reason for deactivating (and reactivating) your audio session is to bring a change of audio policy into effect. A good example is ducking. Let’s say that, in general, we don’t play any sounds, and we want background sound such as Music app songs to continue playing while our app runs. So we configure our audio session to use the Ambient policy in application:didFinishLaunchingWithOptions:, as follows:
[[AVAudioSession sharedInstance]
    setCategory: AVAudioSessionCategoryAmbient
    withOptions: 0
    error: nil];
We aren’t interrupting any other audio with our Ambient policy, so it does no harm to activate our audio session every time our app becomes active, no matter how, in applicationDidBecomeActive:, like this:
[[AVAudioSession sharedInstance] setActive: YES withOptions: 0 error: nil];
That’s all it takes to set and enforce your app’s overall audio session policy. Now let’s say we do sometimes play a sound, but it’s brief and doesn’t require background sound to stop entirely; it suffices for background audio to be quieter momentarily while we’re playing our sound. That’s ducking! So, just before we play our sound, we duck any external sound by changing the options on our Ambient category:
[[AVAudioSession sharedInstance]
    setCategory: AVAudioSessionCategoryAmbient
    withOptions: AVAudioSessionCategoryOptionDuckOthers
    error: nil];
When we finish playing our sound, we turn off ducking. This is the tricky part. Not only must we remove the ducking property from our audio session policy, but we must also deactivate our audio session to make the change take effect immediately and bring the external sound back to its original level; there is then no harm in reactivating our audio session:
[[AVAudioSession sharedInstance] setActive:NO withOptions:0 error:nil];
[[AVAudioSession sharedInstance]
    setCategory: AVAudioSessionCategoryAmbient
    withOptions: 0
    error: nil];
[[AVAudioSession sharedInstance] setActive:YES withOptions:0 error:nil];
Your audio session can be interrupted. This could mean that some other app deactivates it: for example, on an iPhone a phone call can arrive or an alarm can go off. In the multitasking world, it could mean that another app asserts its audio session over yours. You can register for a notification, AVAudioSessionInterruptionNotification, to learn of interruptions.
To learn whether the interruption began or ended, examine the AVAudioSessionInterruptionTypeKey entry in the notification’s userInfo dictionary; this will be either AVAudioSessionInterruptionTypeBegan or AVAudioSessionInterruptionTypeEnded.
In the latter case, the AVAudioSessionInterruptionOptionKey entry may be present, containing an NSNumber wrapping AVAudioSessionInterruptionOptionShouldResume; this is the flip side of AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation, which I mentioned earlier: some other app that interrupted you has now deactivated its audio session, and is telling you to feel free to resume your audio.
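Here’s a minimal sketch of registering for and responding to this notification; the interruptionOccurred: handler name is my own invention:

// Register for interruption notifications (in an init method, say):
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(interruptionOccurred:)
           name:AVAudioSessionInterruptionNotification
         object:nil];

// The handler examines the userInfo dictionary:
- (void) interruptionOccurred: (NSNotification*) n {
    NSDictionary* d = n.userInfo;
    NSUInteger type =
        [d[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan) {
        // our audio has stopped; update the interface to reflect that
    } else { // AVAudioSessionInterruptionTypeEnded
        NSUInteger opt =
            [d[AVAudioSessionInterruptionOptionKey] unsignedIntegerValue];
        if (opt & AVAudioSessionInterruptionOptionShouldResume) {
            // reactivate our session and resume playing, if appropriate
        }
    }
}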
Audio session notifications are new in iOS 6. Previously, it was necessary to set an audio session delegate, or install a handler function by way of the C API.
Interruptions are not as intrusive as you might suppose. When your audio session is interrupted, your audio has already stopped and your audio session has been deactivated; you might respond by altering something about your app’s user interface to reflect the fact that your audio isn’t playing, but apart from this there’s no particular work for you to do. When the interruption ends, on the other hand, activating your audio session and possibly resuming playback of your audio might be up to you. Even this may not be necessary, however; if you use an audio player (AVAudioPlayer, discussed in the next section), it activates your audio session for you, and typically resumes playing, when an interruption ends.
In the multitasking world, when your app switches to the background, your audio is paused (unless your app plays audio in the background, as discussed later in this chapter). Various things can happen when your app comes back to the front. Again, if you were playing audio with an AVAudioPlayer, it’s possible that the AVAudioPlayer will handle the entire situation: it will automatically reactivate your audio session and resume playing, and you won’t get any interruption notifications.
If you’re not using an AVAudioPlayer, however, it is likely that being moved into the background will count as an interruption of your audio session. You don’t get any notifications while you’re suspended in the background, so everything happens at once when your app comes back to the front: you’ll be notified that the interruption began, then notified that it ended, and then your applicationDidBecomeActive: will be called, all in quick succession (and in that order). Make sure that your responses to these events, arriving in a sudden cluster, don’t step on each other’s toes.
When the user double-taps the Home button to reach the application switcher and uses the Play button to resume the current Music app song, you get a notification that an interruption began; if the user then double-taps the Home button again to return from the application switcher to your app, you get applicationDidBecomeActive:, but you do not get any notification that the interruption has ended (and an AVAudioPlayer does not automatically resume playing). This seems incoherent.
Your audio is routed through a particular output (and input). The user can make changes in this routing — for example, by plugging headphones into the device, which causes sound to stop coming out of the speaker and to come out of the headphones instead. By default, your audio continues uninterrupted if any is playing, but your code might like to be notified when routing is changed. You can register for AVAudioSessionRouteChangeNotification to hear about routing changes.
The notification’s userInfo dictionary is chock full of useful information about what just happened. You’re given a description of the new route and possibly the old route, along with a summation of what changed and why. Here’s NSLog’s display of the dictionary that results when I detach headphones from the device:
AVAudioSessionRouteChangePreviousRouteKey = "<AVAudioSessionRouteDescription: 0x1f028840,
    inputs = (null);
    outputs = (
        "<AVAudioSessionPortDescription: 0x1f02af30, type = Headphones;
            name = Headphones; UID = Wired Headphones; channels = (
                "<AVAudioSessionChannelDescription: 0x1f02af80,
                    name = Headphones Left; number = 1;
                    port UID = Wired Headphones>",
                "<AVAudioSessionChannelDescription: 0x1f02afa0,
                    name = Headphones Right; number = 2;
                    port UID = Wired Headphones>"
        )>"
    )>";
AVAudioSessionRouteChangeReasonKey = 2;
The classes mentioned here — AVAudioSessionRouteDescription, AVAudioSessionPortDescription, AVAudioSessionChannelDescription — are all value classes (glorified structs). For the meaning of the AVAudioSessionRouteChangeReasonKey, see the AVAudioSession class reference; the value here, 2, is AVAudioSessionRouteChangeReasonOldDeviceUnavailable — we stopped using the headphones because there are no headphones any longer. A routing change may not of itself interrupt your sound, but Apple suggests that in this particular situation you might like to respond by stopping your audio deliberately, possibly giving the user the option of resuming it, because otherwise sound may now suddenly be coming out of the speaker in a public place.
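A sketch of that response might look like this; routeChanged: is a made-up handler name, and self.player is a hypothetical AVAudioPlayer property:

// Register for route-change notifications:
[[NSNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(routeChanged:)
           name:AVAudioSessionRouteChangeNotification
         object:nil];

// Pause if the previous output device (e.g. headphones) went away:
- (void) routeChanged: (NSNotification*) n {
    NSUInteger reason =
        [n.userInfo[AVAudioSessionRouteChangeReasonKey] unsignedIntegerValue];
    if (reason == AVAudioSessionRouteChangeReasonOldDeviceUnavailable) {
        [self.player pause];
        // update the interface so the user can resume manually
    }
}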
An audio player is an instance of the AVAudioPlayer class. This is the easiest way to play sounds with any degree of sophistication. A wide range of sound types is acceptable, including MP3, AAC, and ALAC, as well as AIFF and WAV. You can set a sound’s volume and stereo pan features, loop a sound, synchronize the playing of multiple sounds simultaneously, change the playing rate, and set playback to begin somewhere in the middle of a sound. New in iOS 6, you can even tell the audio player what output channels of the device to use in producing its sound.
To use an audio player, you’ll need to link to AVFoundation.framework and import <AVFoundation/AVFoundation.h>.
An audio player should always be used in conjunction with an audio session; see the previous section.
Not every device type can play a compressed sound format in every degree of compression, and the limits can be difficult or impossible to learn except by experimentation. I encountered this issue when an app of mine worked correctly on an iPod touch 32GB but failed to play its sounds on an iPod touch 8GB (even though the latter was newer). Even more frustrating, the files played just fine in the Music app on both devices. The problem appears to be that the compression bit rate of my sound files was too low for AVAudioPlayer on the 8GB device, but not on the 32GB device. But there is no documentation of the limits involved.
An audio player can possess and play only one sound, but it can play that sound repeatedly, and you can have multiple audio players, possibly playing simultaneously. An audio player is initialized with its sound, using a local file URL or NSData. To play the sound, first tell the audio player to prepareToPlay, causing it to load buffers and initialize hardware; then tell it to play. The audio player’s delegate (AVAudioPlayerDelegate) is notified when the sound finishes playing (audioPlayerDidFinishPlaying:successfully:); do not repeatedly check the audio player’s playing property to learn its state. Other useful methods include pause and stop; the chief difference between them is that pause doesn’t release the buffers and hardware set up by prepareToPlay, but stop does (so you’d want to call prepareToPlay again before resuming play). Neither pause nor stop changes the playhead position (the point in the sound where playback will start if play is sent again); for that, use the currentTime property.
In a WWDC 2011 video, Apple points out that simultaneously playing multiple sounds that have different sample rates is computationally expensive, and suggests that you prepare your sounds beforehand by converting them to a single sample rate. Also, decoding AAC is faster and less expensive than decoding MP3.
Devising a strategy for instantiating, retaining, and releasing your audio players is up to you. In one of my apps, I use a class called Player, which implements a play: method expecting a string path to a sound file in the app bundle. This method creates a new audio player, stores it as an instance variable, and tells it to play the sound file; it also sets itself up as that audio player’s delegate, and emits a notification when the sound finishes playing. In this way, by maintaining a single Player instance, I can play different sounds in succession:
- (void) play: (NSString*) path {
    NSURL* fileURL = [[NSURL alloc] initFileURLWithPath: path];
    NSError* err = nil;
    AVAudioPlayer* newPlayer =
        [[AVAudioPlayer alloc] initWithContentsOfURL: fileURL error: &err];
    // error-checking omitted
    self.player = newPlayer; // retain policy
    [self.player prepareToPlay];
    [self.player setDelegate: self];
    [self.player play];
}

- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player // delegate method
                       successfully:(BOOL)flag {
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"soundFinished" object:nil];
}
Here are some useful audio player properties:

pan, volume
The stereo positioning and the loudness of the sound, respectively.

numberOfLoops
How many times the sound should repeat after it plays through once; 0 (the default) means it doesn’t repeat. A negative value causes the sound to repeat indefinitely (until told to stop).

duration
The length of the sound (read-only).

currentTime
The playhead position within the sound. If the sound is paused or stopped, play will start at the currentTime. You can set this in order to “seek” to a playback position within the sound.

enableRate, rate
These properties let you play the sound at anywhere from half speed (0.5) to double speed (2.0). Set enableRate to YES before calling prepareToPlay; you are then free to set the rate.

meteringEnabled
If YES (the default is NO), you can call updateMeters followed by averagePowerForChannel: and/or peakPowerForChannel:, periodically, to track how loud the sound is. Presumably this would be so you could provide some sort of graphical representation of this value in your interface; there’s a sketch of the polling after this list.

settings
A read-only dictionary describing features of the sound, such as its bit rate (AVEncoderBitRateKey), its sample rate (AVSampleRateKey), and its data format (AVFormatIDKey).
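Here’s the metering sketch promised a moment ago, under my own assumptions about the surrounding class (a player property, plus some repeating timer that calls pollMeter:):

// Before playing: turn metering on.
self.player.meteringEnabled = YES;

// Called periodically (e.g. by a repeating NSTimer) while playing:
- (void) pollMeter: (NSTimer*) t {
    [self.player updateMeters]; // refresh the cached meter values
    float avg = [self.player averagePowerForChannel:0]; // decibels
    float peak = [self.player peakPowerForChannel:0];   // decibels
    // use avg and peak to drive some graphical level display
}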
The playAtTime: method allows playing to be scheduled to start at a certain time. The time should be described in terms of the audio player’s deviceCurrentTime property.
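For example, here’s a plausible sketch (the two player properties are hypothetical) of starting two prepared audio players in sync, one second from now:

// Both players share the same device clock, so scheduling them
// against a common deviceCurrentTime starts them together.
NSTimeInterval startTime = self.playerA.deviceCurrentTime + 1.0;
[self.playerA playAtTime: startTime];
[self.playerB playAtTime: startTime];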
As I mentioned in the previous section, an audio player resumes playing when your app comes to the front if it was playing and was forced to stop playing when your app was moved to the background. There are delegate methods audioPlayerBeginInterruption: and audioPlayerEndInterruption:withOptions:, but my experience is that the audio player will normally resume playing automatically and the delegate won’t be sent these messages at all. In fact, I have yet to discover a situation in which audioPlayerEndInterruption:withOptions: is ever called when your app is in the foreground (active); it may, however, be called when your app is capable of playing sound in the background, as I’ll explain later in this chapter.
Various sorts of signal constitute remote control. There is hardware remote control; the user might be using earbuds with buttons, for example. There is also software remote control — for example, the playback controls that you see when you double-click the Home button to view the fast app switcher and then swipe to the right (Figure 27.1). Similarly, the buttons that appear if you double-click the Home button when the screen is locked and sound is playing are a form of software remote control (Figure 27.2).
Your app can arrange to be targeted by remote control events reporting that the user has tapped a remote control. This is particularly appropriate in an app that plays sound. Your sound-playing app can respond to the remote play/pause button, for example, by playing or pausing its sound.
Remote control events are a form of UIEvent, and they are sent initially to the first responder. (See Chapter 11 and Chapter 18 on UIResponders and the responder chain.) To arrange to be a recipient of remote control events:
Some UIResponder in your app must return YES from canBecomeFirstResponder, and that responder must actually be first responder.

That responder, or another UIResponder up the responder chain, must implement remoteControlReceivedWithEvent:.

Your app must call the UIApplication instance method beginReceivingRemoteControlEvents.
A typical place to put all of this is in your view controller, which is, after all, a UIResponder:
- (BOOL)canBecomeFirstResponder {
    return YES;
}
- (void) viewDidAppear:(BOOL)animated {
    [super viewDidAppear: animated];
    [self becomeFirstResponder];
    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
}
- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    // ...
}
The question is then how to implement remoteControlReceivedWithEvent:. Your implementation will examine the subtype of the incoming UIEvent to decide what to do. There are many possible subtype values, listed under UIEventSubtype in the UIEvent class documentation; they have names like UIEventSubtypeRemoteControlPlay. A minimal implementation will respond to UIEventSubtypeRemoteControlTogglePlayPause. Here’s an example in an app where sound is produced by an AVAudioPlayer:
- (void)remoteControlReceivedWithEvent:(UIEvent *)event {
    UIEventSubtype type = event.subtype;
    if (type == UIEventSubtypeRemoteControlTogglePlayPause) {
        if ([self.player isPlaying])
            [self.player pause];
        else
            [self.player play];
    }
}
You can also influence what information the user will see in the remote control interface about what’s being played. For that, you’ll use MPNowPlayingInfoCenter; you’ll need to link to MediaPlayer.framework and import <MediaPlayer/MediaPlayer.h>. Call the class method defaultCenter and set the resulting instance’s nowPlayingInfo property to a dictionary. The relevant keys are listed in the class documentation; they will make more sense after you’ve read Chapter 29, which discusses the Media Player framework. The code (from my TidBITS News app) that actually produced the interface shown in Figure 27.1 and Figure 27.2 is as follows:
MPNowPlayingInfoCenter* mpic = [MPNowPlayingInfoCenter defaultCenter];
mpic.nowPlayingInfo = @{
    MPMediaItemPropertyTitle: self.titleLabel.text,
    MPMediaItemPropertyArtist: self.authorLabel.text
};
In the multitasking world, when the user switches away from your app to another app, by default, your app is suspended and stops producing sound. But if the business of your app is to play sound, you might like your app to continue playing sound in the background. In earlier sections of this chapter, I’ve spoken about how your app, in the foreground, relates its sound production to background sound such as the Music app. Now we’re talking about how your app can be that background sound, possibly playing sound while some other app is in the foreground.
To play sound in the background, your app must do these things:

Your app’s Info.plist must contain the “Required background modes” key (UIBackgroundModes) with a value that includes “App plays audio” (audio); the raw XML appears below.

Your app’s audio session must use the Playback category, and it must be active while your sound is playing.
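Viewed as raw XML, that first Info.plist entry looks like this:

<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>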
If those things are true, then the sound that your app is playing when the user clicks the Home button and dismisses your application, or switches to another app, will go right on playing.
When the screen is locked, your app can continue to play sound only if it is capable of playing sound in the background.
Moreover, your app may be able to start playing in the background even if it was not playing previously — namely, if it is mixable (AVAudioSessionCategoryOptionMixWithOthers; see earlier in this chapter), or if it is capable of being the remote control target. Indeed, an extremely cool feature of playing sound in the background is that remote control events continue to work. Even if your app was not actively playing at the time it was put into the background, it may be the remote control target (because it was playing sound earlier, as explained in the preceding section). In that case, if the user causes a remote control event to be sent, your app, if suspended in the background, will be woken up (still in the background) in order to receive the remote control event and can begin playing sound. However, the rules for interruptions still apply; another app can interrupt your app’s audio session while your app is in the background, and if that app receives remote control events, then your app is no longer the remote control target.
If your app is the remote control target in the background, then another app can interrupt your app’s audio, play some audio of its own, and then deactivate its own audio session with the option telling your app to resume playing. I’ll give a minimal example of how this works with an AVAudioPlayer.
Let’s call the two apps BackgroundPlayer and Interrupter. Suppose Interrupter has an audio session policy of Ambient. This means that when it comes to the front, background audio doesn’t stop. But now Interrupter wants to play a sound of its own, temporarily stopping background audio. To pause the background audio, it sets its own audio session to Playback:
[[AVAudioSession sharedInstance]
    setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:nil];
[[AVAudioSession sharedInstance] setActive:YES withOptions:0 error:nil];
[self.player setDelegate: self];
[self.player prepareToPlay];
[self.player play];
When Interrupter’s sound finishes playing, its AVAudioPlayer’s delegate is notified. In response, Interrupter deactivates its audio session with the AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation option; then it’s fine for it to switch its audio session policy back to Ambient and activate it once again:
[[AVAudioSession sharedInstance]
    setActive:NO
    withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
    error:nil];
[[AVAudioSession sharedInstance]
    setCategory:AVAudioSessionCategoryAmbient withOptions:0 error:nil];
[[AVAudioSession sharedInstance] setActive:YES withOptions:0 error:nil];
So much for Interrupter. Now let’s turn to BackgroundPlayer, which was playing in the background when Interrupter came along and changed its own policy to Playback. At that moment, BackgroundPlayer’s sound is interrupted; it stops playing, and its AVAudioPlayer delegate is sent audioPlayerBeginInterruption:. When Interrupter deactivates its audio session, BackgroundPlayer’s AVAudioPlayer delegate is sent audioPlayerEndInterruption:withOptions:. It tests for the resume option and, if it is set, starts playing again:
-(void)audioPlayerEndInterruption:(AVAudioPlayer *)p
                      withOptions:(NSUInteger)opts {
    if (opts & AVAudioSessionInterruptionOptionShouldResume) {
        [p prepareToPlay];
        [p play];
    }
}
An interesting byproduct of your app being capable of playing sound in the background is that while it is playing sound, a timer can fire. The timer must have been created and scheduled in the foreground, but after that, it will fire even while your app is in the background, unless your app is currently not playing any sound. This is remarkable, because many sorts of activity are forbidden when your app is running in the background.
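Here’s a minimal sketch, assuming a timer property and a tick: method of my own invention; scheduled in the foreground, this timer keeps firing in the background so long as the app is actually playing sound:

// Schedule while still in the foreground; the run loop will continue
// to fire this timer in the background while audio is playing.
self.timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                              target:self
                                            selector:@selector(tick:)
                                            userInfo:nil
                                             repeats:YES];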
Another byproduct of your app playing sound in the background has to do with app delegate events. In Chapter 11, I said that your app delegate will probably never receive the applicationWillTerminate: message, because by the time the app terminates, it will already have been suspended and incapable of receiving any events. However, an app that is playing sound in the background is not suspended, even though it is in the background. If it is terminated while playing sound in the background, it will receive applicationDidEnterBackground:, even though it has already received this event previously when it was moved into the background, and then it will receive applicationWillTerminate:.
iOS is a powerful milieu for production and processing of sound; its sound-related technologies are extensive. This is a big topic, and an entire book could be written about it (in fact, such books do exist). I’ll talk in Chapter 29 about accessing sound files in the user’s music library. But here are some further topics that there is no room to discuss here: