Basic video capture from a device consists of a few steps: finding a device, opening it, creating a capture session, and adding input and output to it. More detailed information can be found here. The next code snippet shows how to start capturing video and display it in a QTCaptureView (ARC-compatible code).
QTCaptureDevice *captureDevice = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
if (captureDevice)
{
    NSError *error = nil;
    if ([captureDevice open:&error])
    {
        QTCaptureDeviceInput *deviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:captureDevice];
        QTCaptureSession *captureSession = [[QTCaptureSession alloc] init];
        if ([captureSession addInput:deviceInput error:&error])
        {
            [self.captureView setCaptureSession:captureSession];
            [captureSession startRunning];
        }
        else
        {
            NSLog(@"%s Failed adding input device to session (device = %@, session = %@) with error (%@)", __func__, [captureDevice localizedDisplayName], captureSession, [error localizedDescription]);
        }
    }
    else
    {
        NSLog(@"%s Failed opening device (%@) with error (%@)", __func__, [captureDevice localizedDisplayName], [error localizedDescription]);
    }
}
In my sample application I use a QTCaptureLayer for each capture session to display its video.
In CaptureViewController.m (ARC-compatible code):
- (void)startCapturing
{
    self.capturing = YES;
    for (QTCaptureDevice *captureDevice in [QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeVideo])
    {
        NSError *error = nil;
        if ([captureDevice open:&error])
        {
            QTCaptureDeviceInput *deviceInput = [[QTCaptureDeviceInput alloc] initWithDevice:captureDevice];
            QTCaptureSession *captureSession = [[QTCaptureSession alloc] init];
            if ([captureSession addInput:deviceInput error:&error])
            {
                QTCaptureLayer *sublayer = [QTCaptureLayer layerWithSession:captureSession];
                CGColorRef color = CGColorCreateGenericGray(0.8, 1.0);
                sublayer.backgroundColor = color;
                CGColorRelease(color);
                [[self.view layer] addSublayer:sublayer];
                [captureSession startRunning];
                [self _updatePixelBufferAttributesForSession:captureSession];
            }
            else
            {
                NSLog(@"%s Failed adding input device to session (device = %@, session = %@) with error (%@)", __func__, [captureDevice localizedDisplayName], captureSession, [error localizedDescription]);
            }
        }
        else
        {
            NSLog(@"%s Failed opening device (%@) with error (%@)", __func__, [captureDevice localizedDisplayName], [error localizedDescription]);
        }
    }
}
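For symmetry, capture can be stopped by stopping each session and closing its devices. The method below is a hedged sketch, not part of the original sample: it assumes the sessions created in startCapturing were also stored in a hypothetical NSArray property named captureSessions, which the snippet above does not show.

```objc
// Hypothetical counterpart to -startCapturing. Assumes a property
// `captureSessions` (NSArray of QTCaptureSession) was populated when
// the sessions were created.
- (void)stopCapturing
{
    self.capturing = NO;
    for (QTCaptureSession *session in self.captureSessions)
    {
        // Stop the flow of frames first, then release the devices so
        // other applications can use them.
        [session stopRunning];
        for (QTCaptureDeviceInput *input in [session inputs])
        {
            [[input device] close];
        }
    }
}
```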
The trick to getting video from multiple devices simultaneously lies in the _updatePixelBufferAttributesForSession: method. The issue is that the USB bus has limited bandwidth, so when multiple devices each stream at their default resolution it is impossible to get video from all of them at once. The solution is to reduce the bandwidth each device needs by setting the pixelBufferAttributes of its output.
- (void)_updatePixelBufferAttributesForSession:(QTCaptureSession *)session
{
    NSNumber *preferredHeight = [[NSUserDefaults standardUserDefaults] objectForKey:kLookoutPreferredVideoHeight];
    NSNumber *preferredWidth = [[NSUserDefaults standardUserDefaults] objectForKey:kLookoutPreferredVideoWidth];
    for (QTCaptureVideoPreviewOutput *output in [session outputs])
    {
        NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:preferredWidth, (id)kCVPixelBufferWidthKey, preferredHeight, (id)kCVPixelBufferHeightKey, nil];
        [output setPixelBufferAttributes:attributes];
    }
}
QTKit then takes the preferred resolution into account and configures the device's output resolution for optimal performance. This means that the actual resolution the device uses may differ from the one requested via pixelBufferAttributes.
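Because the negotiated resolution can differ from the requested one, it can be useful to inspect what the device actually delivers. A minimal sketch, assuming this object has been set as the delegate of a QTCaptureVideoPreviewOutput (the delegate wiring itself is not shown in the original sample):

```objc
// QTCaptureVideoPreviewOutput delegate method, called for each frame.
- (void)captureOutput:(QTCaptureOutput *)captureOutput
  didOutputVideoFrame:(CVImageBufferRef)videoFrame
     withSampleBuffer:(QTSampleBuffer *)sampleBuffer
       fromConnection:(QTCaptureConnection *)connection
{
    // The pixel buffer carries the resolution the device actually
    // negotiated, which may differ from pixelBufferAttributes.
    size_t width  = CVPixelBufferGetWidth((CVPixelBufferRef)videoFrame);
    size_t height = CVPixelBufferGetHeight((CVPixelBufferRef)videoFrame);
    NSLog(@"%s Actual frame size: %zu x %zu", __func__, width, height);
}
```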
Download
Lookout (binary)
Lookout source (github)