
Cameras

ADDING A NEW CAMERA

Use the Main menu button, or right-click in the live camera window to add a new camera.

Before you are taken to the full camera settings window, you are first asked for some basic information, such as the camera name and type, as well as some common options.

A camera has two names—a full or “long” name and a “short” name. The full name may be more descriptive and allows a wider range of characters to be used. The short name is used for URLs (web addresses) and filenames, so the characters you may use are more restrictive. It may be more of an “abbreviation” for the full name.

Both the full name and the short name must be unique among all cameras and camera group names. The short name may not both begin and end with a number—it must begin or end with a letter. This allows the software to parse the camera short name from a filename.
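The short-name rule above can be expressed as a simple check. This is only an illustrative sketch: the exact character set Blue Iris permits is an assumption here, and the actual validation is internal to the software.

```python
import re

def valid_short_name(name: str) -> bool:
    """Sketch of the short-name rule: URL/filename-safe characters only
    (assumed set), and the name must not both begin AND end with a digit,
    so the software can parse the short name out of a filename."""
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        return False  # reject characters unsafe for URLs/filenames
    return not (name[0].isdigit() and name[-1].isdigit())

print(valid_short_name("cam1"))   # begins with a letter: acceptable
print(valid_short_name("1cam2"))  # begins and ends with digits: rejected
```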

The most common type of camera used with Blue Iris is a Network IP camera, one connected with either Ethernet or WiFi. You can use cameras connected in other ways provided you have Windows DirectShow drivers (not proprietary drivers that require special software for use).

For quick addition of cameras with similar configurations, you may choose to copy the new camera’s settings from another camera.

When you click OK, you are taken to the full camera settings window’s Video page. In the case of a Network IP camera, the Network IP Configure page is also opened automatically.

VIDEO SETTINGS

IP cameras
From the Video page, the Network IP Configure button opens this window:

Address
If known, enter a valid camera IP address or network host name in the Address box, as well as the camera’s user name and password. If you leave the address box blank, you can try the Find/Inspect button to use UPnP to discover cameras connected to the same LAN segment as your PC. Here’s an example of what it might find:

You may then double-click an item on this list to then immediately inspect the camera using the ONVIF protocol:

If the camera discovery does not find your camera, you may still configure an ONVIF camera by entering its address and then using the Find/Inspect button.

There are reasons for and against configuring a camera as ONVIF in Blue Iris. If supported, configuring the camera in this way can be simpler, and in general features such as audio and PTZ will work as well as video. It’s also necessary to use ONVIF in most cases if you want to use camera-based triggering. The downside is that some functionality is camera-specific and/or camera-manufacturer-specific, such as the ability to “talk” or send audio to the camera. Some PTZ/control and DIO features may also not be fully implemented via a generic ONVIF configuration. To get the best of both, you can always configure using ONVIF and then select the specific make/model for your camera, or a compatible model if listed.

Make/Model
There are hundreds of compatible cameras here, grouped by “make” or manufacturer. If a compatible camera entry is found here, it is generally preferable to use this rather than the generic category. The audio and PTZ functions will be automatically configured as well based on this selection.

If a compatible camera cannot be identified, and the camera is not responding to ONVIF inspection, you may contact Blue Iris support to have it evaluated for compatibility. You will be asked to supply a WAN address with all applicable ports forwarded to the device for testing.

Protocol
For most IP cameras, you should retain HTTP as the protocol. For some cameras, HTTPS may be used for JPEG image retrieval only, but this is not common.

Other protocols are listed and these are used to configure the camera using a generic URL such as RTSP:// or MMSH:// etc. This is not recommended unless the camera has no HTTP interface. You may enter the full URL into the address box, press Tab, and it will be automatically parsed for you into protocol, address, and video path.
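The parsing described above can be illustrated with Python’s standard URL parser. The URL below is hypothetical, and Blue Iris’s actual parsing is internal; this just shows how a pasted stream URL splits into the separate protocol, address, port, and video-path fields.

```python
from urllib.parse import urlparse

# Illustrative stream URL (hypothetical address and path)
url = "rtsp://192.168.0.1:554/live/ch0"
parts = urlparse(url)

print(parts.scheme)    # protocol: rtsp
print(parts.hostname)  # address: 192.168.0.1
print(parts.port)      # media/video port: 554
print(parts.path)      # video path: /live/ch0
```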

Ports
The camera’s HTTP port (if not the default 80) should be added to the address, such as 192.168.0.1:8000.

If the camera uses RTSP, RTMP, or another video protocol over a port separate from the HTTP (many DVRs use proprietary video ports), that port number must be specified in the media/video/RTSP port box. The default port for RTSP is 554, which is the most commonly used type of video streaming for Network IP cameras. Some models use RTMP, which has a default of 1935.

The discovery/ONVIF port may be set automatically during the Find/Inspect operation, but setting it manually may be required in some cases. This port number is later used for PTZ operations for cameras configured to use the ONVIF protocol for PTZ. It is also used for the ONVIF GetEvents function, which is required for camera-based triggering.

Video and audio paths
These will typically be populated automatically.

The camera number selection will replace the macro {CAMNO} in a video path. It’s also used internally for some DVR implementations to select the camera number. If the video path has the camera number “hard coded” such as &camera=0, you may need to directly modify this as necessary.
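The macro substitution described above can be sketched as a simple string replacement. The video path below is hypothetical, purely to illustrate where {CAMNO} fits in.

```python
# Hypothetical video path containing the {CAMNO} macro
video_path = "/cgi-bin/video.cgi?channel={CAMNO}"
camera_number = 3  # value chosen in the camera number selection

# The macro is replaced with the selected number before use
resolved = video_path.replace("{CAMNO}", str(camera_number))
print(resolved)  # /cgi-bin/video.cgi?channel=3
```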

The Setup RTSP back channel for talk option is used in rare cases. In most cases, the ability to talk or to send audio to the camera is implemented on a per-camera basis, and that requires that this box be un-checked. One device known to actually implement this feature is the Doorbird doorbell camera.

Network options
The receive buffer is generally set large enough and you should not need to alter this. However, for very high bitrate cameras (perhaps 8192 kbps and above) on very busy systems (high CPU utilization), it may be necessary to increase this to up to 20MB to avoid a buffer overrun or dropped packets.


The RTP/UDP option is for certain RTSP streaming connections. If your camera is not producing a stream in Blue Iris, yet works with the VLC software using the same RTSP video path, there’s a chance that VLC is using RTP/UDP. Blue Iris does not use this by default as it’s less reliable and requires the use of multiple ports for streaming. With this option you will need to specify a port number, however 4 ports are actually used, so this number must be evenly divisible by 4.
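The port requirement above can be stated as a one-line check: since four consecutive ports are consumed, the base port number must be evenly divisible by 4. The example port numbers are arbitrary.

```python
def valid_rtp_base_port(port: int) -> bool:
    """Four consecutive ports are used for RTP/UDP streaming,
    so the chosen base port must be evenly divisible by 4."""
    return port % 4 == 0

print(valid_rtp_base_port(50000))  # True
print(valid_rtp_base_port(50002))  # False: not divisible by 4
```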

RTSP “keep alives” are reply packets sent occasionally to the camera. Most cameras either require them or tolerate them. In rare cases, these “break” the stream and will cause continuous re-connects every 20-30 seconds.

The RTSP and other video protocols include time code—a way to keep video frames in proper playback order and timing. However, this time code is not always accurate, so it’s possible to disable it here. The software will instead use a combination of real time and frame rates to synthesize the time code.

The software by default first verifies the camera is online by “pinging” its HTTP port before continuing on to video streaming. This allows a check for address redirection, and possibly a session key or cookie as well that may be required for other functionality on some models. You may elect to bypass this functionality here by selecting Skip initial HTTP DNS and reachability tests.

The Decoder compatibility mode exists primarily to offer an alternative JPEG decoder which may be more compatible with some cameras, at the expense of CPU time. For RTSP streaming, it also causes the software to ignore dropped packets and to proceed with decoding regardless. This may result in more frames processed, but there may be incomplete frames as a result, showing video glitching.

Use Get events with PullPointSubscription only with a camera that’s been set up with ONVIF, has a valid ONVIF port number, and for which you would like to use camera-based triggering. The software will query the device for trigger and alert information during operation.

MORE VIDEO SETTINGS
When not adding a network IP camera, this is the first page you will see instead. Many settings are valid for all camera types as well.

The Screen capture device offers a method to use the PC screen as a camera source. As the service process generally cannot interact with the desktop UI, it may not be possible to use this option when running as a service.

USB, Firewire (IEEE-1394) and Analog cameras (via digitizing device) may be added. In all cases, compatible Windows DirectShow drivers must be available. That is, the camera should be working outside of Blue Iris using general Windows software such as Movie Maker or AmCap. If the device requires proprietary software from the manufacturer for operation, it may not be compatible with Blue Iris. When a compatible device is selected, you may edit its Advanced properties:

If you are unable to get video from your camera initially, it’s worth trying an opposite setting for both the YUY2 and Preview pin options.

It’s possible to open two property pages that are implemented by the camera’s driver here. In some cases this may be required to configure a camera to use MJPG compression, which may be the only way it’s able to supply 30fps video at HD resolution for example.

The Video Proc-Amp section is used to have Blue Iris memorize the driver’s proc-amp settings on a per-profile basis. When the camera’s effective active profile changes, these settings will be sent to the camera driver. These settings typically include things like brightness, contrast, color mode, etc. This can be used to force the camera into high-contrast black and white mode at night for example.

The Broadcast from client app device type may be used in conjunction with the iOS phone app. From the app you may begin a stream from the phone to your Blue Iris in order to use the phone as a video source.

Image format
In the case of a Network IP camera, the image size and max frame rate are determined for you based on what the camera is sending to Blue Iris. If these are not as you expect, they need to be set in the camera’s browser interface directly. In some cases, however, there may be parameters in the video URL which control these settings.

The max frame rate will always adjust higher, never lower. This value is used internally by Blue Iris to allocate buffers only, and is not “settable” for network IP cameras. However it is used to adjust the FPS (frames per second) on USB and other camera types.

The Anamorphic option will unlock the image size field for network IP cameras. The actual resolution from the camera will not change, but you can scale or stretch the image as required to obtain the proper aspect ratio or appearance that is required.
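A worked example of the arithmetic involved: if the camera sends 1280x720 but the scene’s true aspect ratio is 4:3, you could enter 960x720 to display it correctly. The numbers here are illustrative, not a recommendation.

```python
# Camera's native (received) resolution -- illustrative values
native_w, native_h = 1280, 720
target_aspect = 4 / 3  # desired width:height ratio

# Keep the height, rescale the width to the desired aspect ratio
display_w = round(native_h * target_aspect)
print(display_w, native_h)  # 960 720
```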

Flip and Rotate settings are offered to account for cameras mounted upside down, at mirrors, etc.

The AOI (area of interest) setting may be used with network IP cameras to select a sub-set of the camera’s video frame as the video source.

A video Delay setting may be used if video arrives much earlier than audio in an attempt to force audio-video synchronization. This is specified in milliseconds (ms).

The option to De-interlace video exists only for older analog video sources which merged two video images (fields) into a single frame. This is now rare and deprecated.

Settings for 360, Hardware decoding, Limit decoding, and Overlays are discussed in a section to follow, Advanced Video Topics.

AUDIO SETTINGS

You may need to visit this page to enable audio streaming from your camera if it was not enabled initially at the New camera window.

Source
IP camera stream is the proper selection for most network IP cameras. Firewire cameras once offered a multiplexed audio-video stream, and for this you would select DirectShow camera. Otherwise, most USB cameras actually use separate devices for audio and video and you must select the audio device here with Other DirectShow device.

Instead of using the camera’s own audio, it’s also possible to mirror the audio from another camera to this one by selecting that camera here.

Options
The Gain slider is used to adjust the camera’s relative volume when it’s not possible or convenient to adjust the camera volume in another way (as through the direct browser interface or property pages in the case of a DirectShow source). This function applies a multiplication factor to all samples received, so it may result in “clipping” at higher values.

An audio Delay setting may be used if audio arrives much earlier than video in an attempt to force audio-video synchronization. This is specified in milliseconds (ms).

By default audio is always recorded and always streamed to web and phone clients. For privacy, legal, or other concerns, you may select to disable this during specific active profiles.

Audio triggering
You may wish to trigger the camera for recording and alerts based on loud-enough sounds. By default, an average sound level is measured over a one second time period and this is used to compare to a sensitivity threshold value. Without the averaging it’s possible that a single sound sample of sufficient amplitude may trigger the camera. The overall sensitivity of the trigger also may be adjusted using a slider control. For testing, a level meter is shown. The camera will trigger when this peaks—and the meter will turn red to show this condition.
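The averaging described above can be sketched as follows: compare the mean amplitude over a one-second window to a sensitivity threshold, so that a single loud sample cannot trigger the camera on its own. The sample format, sample rate, and threshold value here are illustrative assumptions, not Blue Iris internals.

```python
def averaged_trigger(samples, threshold):
    """Trigger only if the average amplitude over the window
    (assumed one second of samples) meets the threshold."""
    level = sum(abs(s) for s in samples) / len(samples)
    return level >= threshold

quiet = [0.01] * 8000 + [0.9]  # one loud spike in an otherwise quiet second
loud = [0.5] * 8000            # sustained loud sound

print(averaged_trigger(quiet, 0.2))  # False: the spike is averaged away
print(averaged_trigger(loud, 0.2))   # True: sustained sound triggers
```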

Talk
Using a microphone to talk or send audio to the camera is supported for many models. This is completely dependent on the make/model selection on the network IP camera configuration page from the Video page described above in this chapter. If talk is not supported by Blue Iris on the chosen model, the sound will be played from the PC’s speakers instead.

You may select on this page a WAV sound file to be played when talk begins (a “hail” sound clip) and when talk ends (a “goodbye” sound clip). Think Star Trek communicator sounds here for example.

PTZ/CONTROL
Although the PTZ page primarily deals with a camera’s ability to Pan, Tilt, Zoom, there are a number of other functions enabled via this page for camera control. Examples include image exposure, DIO (digital I/O) and camera reset.

When using a network IP camera, the type of camera has likely already been pre-populated here for you. Rarely for USB cameras, a PTZ interface is exposed through the DirectShow driver and that may be selected here.

Many analog camera systems have separate serial port-based PTZ motors/drivers. These generally operate using one of several common formats such as Pelco-P or D. You must know the LID (logical ID) and serial port format (baud rate, stop bits, etc.) in order for this to be properly configured. If your network IP camera uses Pelco over the Ethernet connection, you would still use the Network IP option, not serial port.

An External script or program may be called for each PTZ command, allowing you to handle this yourself external to Blue Iris. The parameter sent to the script is a simple command such as UP, DOWN, LEFT, RIGHT, etc.

Options
You may use a combination of settings to adjust for cameras mounted in different ways, or whose drivers expect mirrored commands. These include Reverse tilt, Reverse pan, Reverse zoom, Reverse focus, Reverse IR lights, and Rotate 90.

Select Require admin to prevent non-administrators from manipulating your camera positions. Individual users on the Users page in Settings may be granted PTZ control as well.

Select PTZ w/mouse cursor to allow the user to adjust camera position by clicking directly on the camera window. This applies only when the camera is solo or in a full-screen mode.

You may also un-select the option to Enable PTZ UI. If disabled, it will not be possible to use the PTZ controls in the main window UI when this camera is selected.

By default, motion detection is suspended during PTZ operation and for 1.5 seconds after the move button is released. You may disable this behavior here.

By default, preset-cycle and scheduled PTZ events are disabled for a period of 30 seconds following any manual PTZ activity and this may be adjusted here. This prevents the user from “fighting” with these other functions.

When using motion zones (described in Triggering and Motion detection below) you may choose to “invalidate” these when there is manual user PTZ input. Typically your zones are configured against a particular background, and if the camera position changes, these may no longer be valid. However, if you stick to using preset positions instead of arbitrary directional movements, there is a feature to create a custom motion zone map for each preset.

For cameras which support PTZ speed, you may select an initial value here. You may then adjust this using the right-click menu over a camera window.

Preset positions
As motion detection may be suspended during manual movements, it may also be suspended for a preset number of seconds following a preset position change. This is called the “max travel time.” It’s possible to apply this motion suspension to all cameras which are members of any of this camera’s groups with the Apply to group/s setting.

Auto-cycle patrol may be enabled here, but it also has an icon in the PTZ pane in the main window UI. If you select to auto-reverse the cycle, it will “bounce” from end to end such as 1-2-3-4-3-2-1 rather than 1-2-3-4-1-2-3-4.

Click Edit presets to adjust individual preset settings.

You may specify up to 40 presets per camera. Although it is most straightforward to configure these 1:1 against what you have configured in the camera, it’s possible to set these arbitrarily as well.

The # column refers to the button number in the main window UI. The name/value is what is used or sometimes “sent” to the camera for each button. The description is typically just FYI, but may be significant to the camera as well.

Use the Call button to send the command to the camera immediately for testing. Use Clear to remove the name/value and description. Only presets with these values set may be used in the UI or remotely via client app or browser.

Use the On call… button to define an action set to be executed in response to use of the preset button. This occurs in addition to what’s sent to the camera.

You may choose whether each preset participates in the auto-cycle patrol function, and during which active profiles. You may also select how long to wait after calling this preset before calling the next. If you select to Delay with motion sense, the next preset will not be called until the camera is un-triggered and is no longer detecting motion.

Each preset may have its own motion zone map to override the one defined for the camera on the Motion Detection page from the Trigger page. Keep in mind that these maps are resolution-dependent—if the camera image size changes, the map will also change. If a custom preset map is in use and another preset is called without a custom map, the software will reload the map from the Motion Detection page. See that topic later in this chapter for instructions on defining a motion zone map.

If certain presets require more or less time to move into position than others, you may select to override the value set on the PTZ page here.

You may specify one or more global DIO bits to be monitored in order to automatically send the selected preset position to the camera. All specified DIO bits must be active in order to trigger.
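The “all specified DIO bits must be active” rule is an AND over a bitmask, which can be sketched as below. The bit numbering and mask values are illustrative assumptions.

```python
def preset_should_fire(dio_state: int, required_bits: int) -> bool:
    """True only when every monitored DIO bit is active
    in the current global DIO state."""
    return (dio_state & required_bits) == required_bits

required = 0b0101  # hypothetically monitoring DIO bits 0 and 2
print(preset_should_fire(0b0111, required))  # True: both bits active
print(preset_should_fire(0b0100, required))  # False: bit 0 inactive
```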

Position images
Each preset position may have an associated camera image saved to disk. This image is currently used to identify the preset position via the UI3 browser interface, but may be used for other clients in the future.

ADVANCED VIDEO TOPICS

360
Use this setting to inform the software that this camera has a fish-eye lens. If the camera is mounted on a ceiling, it can capture a 360-degree view. Mounted on a wall or door, it creates a 180-degree panorama. Your settings here determine how the camera is de-warped when viewed via either the clip Viewer window or a remote client.

As it may be extremely CPU-intensive to render the video in this way, it is not currently offered for live video display.

Many cameras with this feature also offer multiple video streams that can be used to break up the warped view into several de-warped views. In this case, you may consider adding multiple camera windows to the software, each requesting a separate view from the camera.

Hardware decoding
With appropriate hardware, it’s possible to “off-load” some of the processing required to view or otherwise process live video from the CPU to dedicated decoding hardware. The software has integrated support for both Intel QuickSync and Nvidia hardware decoding (NVDEC). Please see ark.intel.com to learn which Intel chips have this capability and to compare their relative performance. Please see the following link to learn which Nvidia hardware offers this capability:

https://developer.nvidia.com/video-encode-decode-gpu-support-matrix

This may be enabled globally on the Cameras page in Settings, or here on the Video page per-camera. You may use a mix of these as well for specific cameras.

In testing, there are limits to the number of cameras that may be assigned to decode video via hardware, and this will depend on the hardware as well as the overall MP/s (megapixels/second, based on FPS and frame size) being processed by the camera. When limits are reached, actual FPS from the camera will be seen to decline, indicating that the processing thread is saturated as it waits for the hardware to complete decoding.

Not all cameras or camera stream formats will be compatible with this technology. The software will attempt to disable this feature and return to software decoding if necessary, and this condition should be logged to the Messages page in Status. For the best chance at compatibility, you should ensure that the stream is encoded as “simply” as possible via direct camera settings. H.264 “main” profile without manufacturer-specific add-ons such as “+” or “smart” modes is most likely to be compatible. However, as hardware decoding and the associated drivers improve, newer encoding methods such as H.265 and “high” profiles may also be supported soon, if not today.

With Intel decoding, you have the option to add VideoPostProc. This uses hardware as well for colorspace conversion. The H.264/265 codecs use a YUV variant, whereas Blue Iris must use RGB for motion detection and display. The chipset may provide some assistance with this conversion.

The Also BVR checkbox will cause the software to also attempt hardware decoding for BVR clips that were recorded by this camera. As hardware decoding may add a delay (one or several frames) as it creates a “pipeline” for decoding, this may cause an initial “black screen” and then interfere with the smooth use of video scrubbing both in the viewer window and when remote viewing.

Limit decoding
The option to Limit decoding unless required is another way to manage CPU resources. When enabled, only key frames are normally decoded and displayed. A key frame is a “complete” frame—all other frames rely on key frames in order to be rendered, as they contain only the “changes” from frame to frame. When you select the camera in the main window UI, or if someone is viewing the camera (or one of its groups) via a client app, then all frames will once again be decoded for display.

This CPU-saving scheme works great as long as your camera is actually sending an adequate number of key frames. The recommendation is to have about 1 key frame/second. This is a setting in the camera’s browser-based settings, usually under a “video encoding” section. It may be labeled as “key frame rate” or “i-frame interval” for example. You can view the actual rate on either the General page in camera settings, or on the Cameras page in Status. It is shown after the overall frame rate—for example, 15.0/1.0 indicates 15 fps with 1 key frame/second. A value of 0.5 or less is considered insufficient to use this feature.
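A small sketch of reading that rate display, e.g. “15.0/1.0” meaning 15 fps with 1 key frame/second, and checking whether key frames arrive often enough for this feature. The display-string format is taken from the example above; the check itself is just an illustration of the stated 0.5 cutoff.

```python
def key_rate_ok(rate_display: str) -> bool:
    """Parse a 'fps/keyfps' display string and check that the
    key frame rate exceeds the stated 0.5/second cutoff."""
    fps, key_fps = (float(x) for x in rate_display.split("/"))
    return key_fps > 0.5

print(key_rate_ok("15.0/1.0"))  # True: 1 key frame/second is adequate
print(key_rate_ok("15.0/0.5"))  # False: 0.5 or less is insufficient
```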

Overlays
By default, a current time stamp overlay is drawn on each video frame. Use the Edit button to customize this.

Use the mouse to select, position, and size existing elements. Buttons also exist here to Delete and “Send to back” in order to manage objects which may be layered in some way.

Use the Add text/time button to add a new text overlay:

You may choose a preset time or date macro or enter your own. Please see the topic at the end of the Action Alerts chapter for a complete list of possible macros.

Opacity and shadowing may increase the CPU time required to render the text. You may select to only draw the overlay when specific global DIO bits are set.

Use the Add image button to add a graphic overlay:

A PNG file has a built-in transparency channel. Other formats such as JPG and BMP are supported, but transparency for these involves color substitution—you select the color which should be rendered as transparent.

As with text overlays, the use of opacity and transparency here will result in higher CPU demand.

Select to constrain proportions to maintain the object’s “natural” aspect ratio. Un-select this option to be able to size the object as you please.

You may select to only draw the overlay when specific global DIO bits are set.

GENERAL SETTINGS
Camera name considerations were discussed at the beginning of this chapter. Groups will be discussed in a topic to below. However, several other important settings may be found on the General page:

You may assign an Event color to the camera. This will be used in the clips list to highlight clips recorded from this camera. It’s also used in the timeline view to create tracks, where cameras with the same color are grouped together onto a single track.

The Notes field exists here for your benefit only and is not used otherwise by the software.

You may un-check the Enabled option to disable the camera. You will find an option in the main window UI right-click menu to Hide disabled cameras.

You may mark a camera as Hidden; however, it may still be visible in the main window UI with a right-click menu option to Show hidden cameras.

You may select during which active global profiles the camera is itself Active. Whether or not the camera is active will further be determined by the Schedule page here in camera settings. Inactive cameras do nothing but display video, unless that too is disabled on the Schedule page.

Until a proper dedicated Amazon Echo app can be developed, beta functionality exists to determine what Echo is able to set concerning this camera. Options include Enable, Select, etc.

Status
This page displays several vital statistics regarding the camera’s streaming. These values are also visible on the Cameras page in Status.

If the camera is actively pushing to a Windows Media or Flash server, that connection status is visible here. These features are discussed on the Webcast page.

The blue bar on this page will have icons displayed which contribute to the global status window at the bottom of the main window UI.

Export and Import
You may save the camera’s settings to a .REG file, and then later import that same file to restore its settings. This is one way to “copy” settings from one camera to another as well.

There are buttons to export and import all software settings (including all cameras) at once on the About page in Settings.

Keyboard shortcut
Use the button to Set KB shortcut in order to assign a keyboard key combination to this camera. When these keys are used, the camera will be selected. The selected camera may be “soloed” where it is visible alone (an icon at the top of the main window UI controls this feature), or it may be used to determine the source of live audio (see Cameras page in Settings). The camera’s group may be configured for “auto-mixing mode” where the selected camera is used for the group stream, potentially useful in a demonstration or video production application.

THE CAMERA CONTEXT MENU
Right-click in a camera window to bring up this menu:

Many of these commands are available via UI buttons as well, and are described elsewhere. However, many are available from here alone.

Camera windows may be moved to the desktop by dragging and dropping, or you may use the Open in desktop frame command.

Camera windows may be dragged/dropped to rearrange them in the main window UI, or you may select Move to first layout position. Additional frame commands are available such as Keep frame on top and Aspect lock.

Inactive cameras are not normally hidden, but you have the option to hide them if you have also un-selected the option on the Schedule page to Continue to display and stream video when inactive.

De-select Show camera names to remove camera window headers and to make more space available for camera video.

Use Queue JPEG post now to immediately upload an image using the destination set on the Post page in camera settings.

For USB and analog cameras, additional Hardware property pages may be available.

Commands to set the camera’s DIO outputs and to send a reset command to the camera may be found on the PTZ/control menu.

SCREEN LAYOUT AND FRAMES
Right-click in the live cameras window to find the Layout menu:

You may choose to prioritize either 1, 2, or 4 cameras together in the top-left of the live video window. The remainder of cameras will be arranged to fit to the right and beneath these.

The ratio options 1:1 through 1:9 define the height of this group of cameras relative to the others. This ratio may also be adjusted by using the slider found at the top of the main window UI next to the group selection box.

A camera window open on the desktop is called a frame window. Additional cameras may be added to one frame window by dragging them into the frame. The entire frame may then be sized or positioned, possibly onto a secondary monitor. Individual cameras may be removed from the frame by right-clicking and un-checking the option to Open in desktop frame. Several options also exist on that menu to maximize, minimize, or close the entire frame (returning all cameras to the main window UI).

Blue Iris remembers the position of all frame windows when it is closed and restarted.

Rearranging camera windows
Use drag and drop within the main window UI or within a frame window to rearrange cameras. Many popular remote desktop solutions do not support drag and drop, so it may only be possible to do this at the console directly.

Digital zoom
Use the mouse wheel over a camera window to zoom in digitally (the camera lens does not actually move). When zoomed in, the mouse cursor will become a “hand” icon and may be used to scroll around.

Use the mouse wheel again to zoom out, or you will find a Zoom out command on the right-click menu.

The sense of the mouse wheel may be reversed using a setting on the Other page in Settings.

CAMERA GROUPS
A camera group is used for viewing a subset of cameras. It’s also used to provide users access to a subset of cameras on the Users page in Settings.

A group is selected for viewing in the main window UI by using the selector box at the top of the window. The clips and timeline views are always filtered to only show items relevant to the selected camera group.

Adding and removing groups

The Groups field on the General page in camera settings is used to place the camera into one or more groups.

A group exists when at least 1 camera is a member. One caveat, however: a group will not appear for viewing remotely by the client app or browser unless it has more than 1 member camera.

A group is only deleted when all cameras are removed from the group.

A group should have a name unique from all camera short names to prevent conflicts when requesting camera and group streams remotely.

Group settings
Use the gears icon to the right of the group selection box to open group settings. A number of options here control how the group may be viewed remotely and how it and its cameras participate in auto-cycling.

Webcasting
By default, the group will be viewable via remote browser or app, and its cameras will be arranged to fit into a rectangle, with each camera scaled to roughly half size. You may instead force a specific size for this view. The default FPS (frames/second) for a group view is 10; however, you may find that a lower value such as 5 is sufficient, and this will save considerable CPU resources. Use the Fast scaling to save CPU option to override the global scaling setting on the Cameras page in Settings just for this stream.

If Limit decoding unless required is enabled for any of the cameras in the group, the option here to Require/decode all camera frames when streaming will provide a more fluid view of these cameras when viewing remotely.

You have the option of drawing the yellow borders around cameras currently in the triggered state with the Highlight borders option.

If you select to Enable camera auto-cycle stream, a second stream will be made available remotely, specifically for cycling through the cameras in the group. The group name used for this in URLs and internally is the ‘@‘ symbol followed by the group name.

You have the options to include or exclude Audio, Hidden cameras, and Inactive cameras (without video) in these remote views.

By default, the remote clients exclude clips for which there is no corresponding visible camera. However, you may override this behavior by selecting Include clips from excluded cameras.

Auto-Cycle
There are two types of auto-cycle, either individual cameras within a group, or a cycle through the groups themselves. Do not confuse this with PTZ preset cycle where the camera cycles through its PTZ preset positions.

Group cycle is initiated when you start from the All cameras group with auto-cycle enabled, and at least one other group exists that has enabled the option to Participate in group cycle. The All cameras group itself may or may not participate. The software will cycle through each group in turn, pausing for the Auto-Cycle delay time.

Camera cycle is initiated when you start from any group other than All cameras, OR, there are no groups that have enabled the option to Participate in group cycle.

The option to Favor cameras or groups which are triggered or sensing motion will cause some cameras or groups to be skipped in the cycle if there are others where there is motion activity. Furthermore, there is an option to Only cycle if there is at least one camera triggered. In this case the software will display the camera group (in the case of camera auto-cycle) or the All cameras group (in the case of group auto-cycle) until a camera is triggered.

Note that in the case of camera auto-cycle, you must also select one camera in the group to begin the cycle unless Only cycle if there is at least one camera triggered is also used.

The option to Always switch to selected camera (manual mixing mode) is a specialized mode which may be used to manually switch between cameras in the group rather than in a timed cycle. Think of a video studio where the producer selects the camera to be “live” based on what’s happening in the scene or interview.

If auto-cycle does not begin, check whether you have the option Only cycle when Cameras window is in full-screen mode selected on the Cameras page in Settings. Also, the option Always solo selected camera on that page should generally be enabled; otherwise the cameras will just be selected in turn rather than soloed.

TRIGGERING AND MOTION DETECTION
When a camera is triggered, you will see its border painted yellow and a lightning bolt icon appear in its header as well as in the main window status bar.

Settings on this page as well as on the Record and Alerts pages may change with the active profile. They may also be synchronized with another camera—please see that topic below.

Sources
The Motion sensor is the software’s original and most commonly used trigger source. It is discussed in a topic to follow.

DIO (Digital Input/Output) refers to electrical signals received by a global DIO device such as a SeaLevel box or an Arduino serial port.

The camera itself may have DIO terminals which may be used as a trigger source. Also classified as a camera DIO source for our purposes are signals received from ONVIF GetEvents PullPointSubscriptions.

Configured on the Watchdog page, the camera may be triggered when there is a loss of signal.

Configured on the Audio page, the camera may be triggered when there is sound of sufficient amplitude and duration.

It’s possible that this camera is triggered in response to the trigger on another camera, and this is called a Group trigger.

The camera may be triggered by other External means, possibly via menu command or action set executed for another purpose.

When triggered
When a camera is triggered, most commonly recording may begin, an “alert image” may be captured and alert actions may be executed. These are configured on the Record and Alerts pages. However, there are other possible trigger responses:

All cameras in one or more camera groups may be triggered simultaneously.

You may want to restore/focus the main window UI if it has been minimized. There’s a related setting on the Cameras page in Settings to also move the window to the foreground if it has been moved behind other windows.

You may want to move the camera to a particular PTZ preset position, along with all other cameras in selected groups.

You can link Flash or Windows Media webcasting with a camera’s triggered state. The software will push video to one of these services only when the camera is triggered. Note that this means no video will be pushed when the camera is not triggered.

It’s possible to have a motion sensor trigger first analyzed by an external service prior to executing any alert actions. Blue Iris has partnered with Sentry Smart Alerts for “person detection” as an initial AI provider. With this technology, the alert image is sent to their server for analysis. Only if it is determined to contain a person are alerts fired; if not, the trigger and its alert actions are considered cancelled and alert images are marked accordingly in the clips and timeline views. Click the learn more link for more information.

Additional services for AI person, object, vehicle, and license plate recognition, both built-in and external, will be offered as version 5 development continues.

Break time
The Break time is simply the duration of the triggered state. If there is no additional triggering during the break time, the trigger state will expire and recording will stop (unless otherwise configured on the Record page).
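The break-time behavior described above can be illustrated with a small sketch. This is not Blue Iris code; it is a hypothetical model of the rule that re-triggering extends the triggered state and that the state expires once the break time passes with no further triggers (the class name, default value, and method names are all illustrative assumptions):

```python
import time

class TriggerState:
    """Hypothetical sketch: a trigger whose state lasts for a 'break time'.

    Re-triggering while already triggered extends the state; once the break
    time elapses with no further triggers, the state expires.
    """
    def __init__(self, break_time=8.0):
        self.break_time = break_time   # seconds the triggered state persists
        self.expires_at = None         # None means not triggered

    def trigger(self, now=None):
        now = time.monotonic() if now is None else now
        self.expires_at = now + self.break_time   # start or extend the state

    def is_triggered(self, now=None):
        now = time.monotonic() if now is None else now
        return self.expires_at is not None and now < self.expires_at

ts = TriggerState(break_time=8.0)
ts.trigger(now=0.0)
assert ts.is_triggered(now=5.0)       # within the break time
ts.trigger(now=5.0)                   # re-trigger extends the state
assert ts.is_triggered(now=12.0)      # new expiry is 5.0 + 8.0 = 13.0
assert not ts.is_triggered(now=13.5)  # expired; recording would stop
```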

THE MOTION SENSOR
The motion sensor is the most commonly used method for triggering the camera to record and to fire alerts. The software may consider overall “change” in the image from frame to frame, or it may attempt to isolate “objects” and track them for movement.

Basic
There is some persistence or hysteresis involved to reduce noise, but at its core, the motion sensor simply counts the number of changed pixels from frame to frame.

The object size and contrast settings may be considered thresholds: the amount of change that must occur to be considered motion. Contrast is the change required of individual pixels, and object size is the overall number of pixels that must be changing.

By moving either slider to the left, you will make the motion detection more sensitive, as there will be lower thresholds to overcome. The software attempts to show this visually with a small box in the center of a camera preview window to represent the minimum object size. There may also be a larger rectangle surrounding this—this is the maximum object size as set under Object detection below. This image is updated in realtime—if someone were to walk through the scene, an additional rectangle is drawn to represent the amount of actual motion. If this realtime rectangle is larger than the minimum and smaller than the maximum, the camera is considered to be sensing motion.

It’s not until the motion sensing persists for a time specified by the minimum duration or “make time” that the camera is actually triggered.
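At its core, then, the basic sensor combines a per-pixel contrast threshold, a changed-pixel count threshold, and a make time. The actual Blue Iris implementation is not public; the following is a hypothetical sketch of that logic on frames represented as flat lists of grayscale values, with all names and defaults chosen for illustration:

```python
def changed_pixels(prev, curr, contrast):
    """Count pixels whose frame-to-frame change exceeds the contrast threshold."""
    return sum(1 for a, b in zip(prev, curr) if abs(a - b) >= contrast)

def sense_motion(frames, contrast=30, min_pixels=4, make_frames=2):
    """Return True once motion persists for 'make_frames' consecutive frame
    pairs (a stand-in for the make time)."""
    persist = 0
    for prev, curr in zip(frames, frames[1:]):
        if changed_pixels(prev, curr, contrast) >= min_pixels:
            persist += 1
            if persist >= make_frames:
                return True   # camera would now be triggered
        else:
            persist = 0       # motion must be continuous through the make time
    return False

# An "object" of four bright pixels moving across an 8-pixel frame:
f0 = [0] * 8
f1 = [255, 255, 255, 255, 0, 0, 0, 0]
f2 = [0, 0, 255, 255, 255, 255, 0, 0]

assert not sense_motion([f0, f0, f0])   # static scene, no trigger
assert not sense_motion([f0, f1, f1])   # brief change, make time not met
assert sense_motion([f0, f1, f2])       # sustained motion triggers
```

Moving either slider left lowers `contrast` or `min_pixels` here, which is why detection becomes more sensitive.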

You may select to have motion pixels and/or object rectangles drawn onto each frame of video by using the Highlight option. Highlighting may be limited to frames where the camera is triggered. You may also black-out areas of the image which are specifically excluded from any motion zone (those are discussed below).

Object detection
The software can use an algorithm that attempts to identify rectangular groups of changing pixels and classify them as objects. When highlighting is enabled, objects are drawn with blue boxes, turning yellow when they reach the triggering threshold.

To force a trigger, in addition to existing for a specific amount of time (the motion sensor’s make time), an identified object also must travel a specific number of pixels, set here. The center point of the object is used for this purpose.

You may require that an object also cross specific zones in order to trigger. Zones are discussed below. They have letters A-H, where H is the special “hot spot” zone. There is a special syntax to use when specifying zone crossing:

A      Object must only have entered zone A
AB     Object must have entered both zones A and B, overlapping or not
A>B    Object must start in zone A, traveling to zone B
A-B    Object must travel between zones A and B in either direction
AB>C   Object must travel from zones A and B to zone C

When two zone letters are used together, this is an and condition. The object must have existed in both zones (not necessary simultaneously) before possibly moving to a third. It is possible to specify more than two zones together, for example ABC>D.

Or conditions may be specified by adding multiple zone crossing rules separated by commas.
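To make the syntax concrete, here is a hypothetical sketch of how such rules might be evaluated against an object's zone history (the ordered list of zone sets the tracked object has occupied). This is an illustrative interpretation of the documented syntax, not Blue Iris's actual parser:

```python
def rule_matches(rule, history):
    """Evaluate a zone-crossing rule such as 'A', 'AB', 'A>B', 'A-B', or
    'AB>C' against a history like [{'A'}, {'A','B'}, {'B'}].
    Comma-separated alternatives are OR'd together."""
    def visited(zones, hist):
        # the object must have existed in every listed zone,
        # though not necessarily simultaneously
        return all(any(z in s for s in hist) for z in zones)

    for alt in rule.split(','):
        alt = alt.strip()
        if '>' in alt:                       # directed travel, e.g. AB>C
            src, dst = alt.split('>')
            # once all source zones have been covered, require the
            # destination zones to be covered afterwards
            for i in range(len(history)):
                if visited(src, history[:i + 1]) and visited(dst, history[i + 1:]):
                    return True
        elif '-' in alt:                     # either direction, e.g. A-B
            a, b = alt.split('-')
            if rule_matches(f"{a}>{b}", history) or rule_matches(f"{b}>{a}", history):
                return True
        else:                                # AND of zones, e.g. A or AB
            if visited(alt, history):
                return True
    return False

walk = [{'A'}, {'A'}, {'B'}]
assert rule_matches('A', walk)        # entered zone A
assert rule_matches('A>B', walk)      # traveled A to B
assert not rule_matches('B>A', walk)  # wrong direction
assert rule_matches('A-B', [{'B'}, {'A'}])          # either direction
assert rule_matches('AB>C', [{'A'}, {'B'}, {'C'}])  # A and B, then C
```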

A maximum object size may be specified to reset the motion sensor. This object size is represented as the larger blue rectangle on the image preview back on the motion sensor page. This is useful to filter out very large changes possibly due to lighting or camera movement (a scene change).

Zones and hot spot

Zones allow you to identify parts of the image for consideration by object detection. They are also noted with an alert image in the clips list so that you may know more specifically the source of motion.

In an opposite sense, any part of the image not covered by a zone is essentially masked—that is, not considered for motion at all.

By default, the entire image is Zone A. If your intention is to simply mask out parts of the image, you may remove elements of the image from Zone A by holding the Control key and drawing rectangles. You may instead Invert or Clear the image and then re-draw areas to be monitored.

Each zone is displayed and manipulated in turn as it is selected at the top of the window. Areas which are part of other zones are shown with hatch marks for your convenience. Here’s an example where Zone B is being drawn. Zone A is shown with hatch marking in yellow. Zone H (the Hot spot zone) is shown with red hatch marking:

Zones may or may not overlap. This does not affect use for masking, but it may affect the way that object detection functions. An object is considered to have been in or traveled to a zone if any part of that object touches the zone.

Importantly, a motion detection object as used by object detection and tracking must always exist in one or more zones. This means there must be continuous zone coverage through areas where an object is being tracked. Instead of worrying about zones overlapping or abutting, it is much easier to just set up one zone to be used as an overall “mask” and then draw additional zones to be used with object detection and tracking. For example, if zone A is left as representing the entire image, you may then draw smaller zones B and C anywhere on the image, and then apply an object tracking rule B>C without consideration of how those zones are aligned or spaced.

As it is possible to define multiple zone maps, perhaps for different active profiles or for recently used PTZ presets, and it’s possible to invalidate the zone map using PTZ movements, the currently active zone map is identified for your information on the bottom of the motion sensor page.

The Hot Spot zone
This is a zone for special applications only and should not be generally used. Objects or motion in this zone will override the make time and other rules and force an immediate trigger. This can result in a huge number of false triggers, so it should be used with caution.

Opposite sense
This feature reverses the sense of the motion detector so that it may be used to identify the absence of motion. You may be interested in the movements of a child or an elder, for example.

When using this feature, you will want to set the minimum duration or make time of the motion sensor to a much higher value than the default of 1 second. If you specify 60 seconds for example, the camera will be triggered for recording and alerts if there is no motion in any 60 second period of time.

More advanced motion sensor options
A number of additional settings may be used to fine-tune or further enhance motion sensor accuracy.

The Black and white and Cancel shadows options alter contrast calculations. Black and white attempts to simply remove the effect of color differences. In order to cancel shadows, higher contrast may be required between neighboring pixels.

By default, to save CPU and smooth out noise, the image is reduced by considering it in blocks. The High definition option increases the number of motion detection blocks used, typically by 4x.

You may select to use either a simple or Gaussian Algorithm. The Gaussian algorithm uses slightly more complex heuristics for tracking pixel changes over time, possibly helping to reduce false positives, but at a slight increase in CPU demand.

ALERTS

OK, the camera has been triggered. Now what? In addition to recording, you may fire any number of actions.

Settings on this page as well as on the Record and Trigger pages may change with the active
profile. They may also be synchronized with another camera—please see that topic below.

When to fire
By default, the actions defined for this alert are only fired when this camera is triggered.
However, you may want one camera to represent all cameras in a group, or all cameras on
the system in this regard.
The alert may be filtered to only occur with specific types of triggers. Furthermore, for a
motion sensor trigger, you may require that specific zones were entered, in conjunction with
the object detection and tracking feature.
By default, an alert occurs only at the leading-edge of the camera’s trigger state. If the camera
is triggered again while it is already in a triggered state, this is called a re-trigger. If you enable
the option to alert for re-triggers as well, this will result in more alerts overall.

Timers
Timers exist both to reduce the number of false-positive alerts and to reduce the
overall alert frequency.
The Allow disarm time by delaying alerts setting basically gives you time to prevent an
alert, perhaps as you enter the home or building where the camera would normally be
triggered. If you are using the Sentry Smart Alerts, this is basically what is employed—the
camera still triggers for recording, but the alerts are delayed until the Sentry service makes a
determination on the accuracy of the detection.
One measure to prevent false-alerts is to employ the Wait until triggered at least X times
feature. Generally if there’s a break-in or anything else of concern, this will cause multiple
triggers over a short period of time. This feature allows you to ignore short-lived motion
events which may be of no consequence.
The Minimum time between alerts is used simply to reduce the number of consecutive
alerts. Following an alert, you may not be interested in receiving alerts in quick succession
as motion and triggering continue.
You may also consider the Minimum and maximum time since a trigger on another
camera or group of cameras. If multiple cameras cover a specific area for example, you may not want them all to fire alerts at the same time—the minimum time since a trigger on
another camera will prevent this. The maximum setting works in the opposite way—the
alert will be fired only if another camera or group has been recently triggered within the time
you specify. It’s possible to use the minimum setting without the maximum setting by
setting maximum to 0.
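The interaction of the trigger-count and spacing timers can be sketched as follows. This is a hypothetical model of the two settings described above, with illustrative names and defaults; the camera-to-camera minimum/maximum timers would layer on similarly:

```python
class AlertGate:
    """Hypothetical sketch of the alert timers: require N triggers before the
    first alert, then enforce a minimum spacing between alerts."""
    def __init__(self, min_triggers=3, min_interval=60.0):
        self.min_triggers = min_triggers
        self.min_interval = min_interval
        self.trigger_count = 0
        self.last_alert = None

    def on_trigger(self, now):
        """Return True if this trigger should fire the alert actions."""
        self.trigger_count += 1
        if self.trigger_count < self.min_triggers:
            return False      # "wait until triggered at least X times"
        if self.last_alert is not None and now - self.last_alert < self.min_interval:
            return False      # "minimum time between alerts"
        self.last_alert = now
        return True

gate = AlertGate(min_triggers=2, min_interval=60.0)
assert gate.on_trigger(0.0) is False   # first trigger: below the count threshold
assert gate.on_trigger(5.0) is True    # second trigger fires the alert
assert gate.on_trigger(30.0) is False  # too soon after the last alert
assert gate.on_trigger(70.0) is True   # spacing satisfied, alert fires again
```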

Actions
An action set may be defined for both trigger and trigger reset. Please see the Alerts and
Actions chapter for details.

SCHEDULE AND EVENTS
The global schedule has already been covered in the Shield, Profiles and Schedules chapter.
However, although not generally recommended as it makes keeping the active profile
straight much more difficult, it’s possible to override the schedule on a per-camera basis:

By default, an override of the global schedule (whenever the green “play” icon at the top of
the main window UI is replaced with a stop or pause icon) will also override this camera
schedule. You may override this behavior with the setting here to Also ignore manual
global profile overrides.
By default, when the camera schedule shows inactive (profile 0, clear), the camera is inactive. However, you may override this so that the effective camera profile becomes the
active global profile whenever it would otherwise be Inactive.
Whenever the camera’s active profile differs from the globally active profile, that number
is shown at the top-left of the camera’s header in the live video window.
There are options here to force the camera into the inactive state (profile 0) unless the
camera is currently externally triggered, being viewed remotely, or is in full-screen
display.
By default, the camera continues to display and stream live video when inactive.
However, if you’d rather see a gray window with an Inactive message, you may
uncheck this box.
If you place a file inactive.jpg in the Blue Iris program folder under subfolders cameras and
then the camera short name, it will be displayed when the camera is inactive and not
displaying video.

Events
Somewhat hidden on the Schedule page is the option to set a timed events list for the
camera.

Events are executed at the specified times throughout the day. By selecting the Search-back
at startup/reset option, the software will send the most recent event of each type when the
software is first started or the camera is reset. That is, if you have a PTZ preset position 0
event at 12 noon and a PTZ preset position 1 event at 12 midnight, and the software is started
at 3pm, the PTZ preset position 0 command will be sent to the camera. Event search-back also applies when using the option on the PTZ page to resume cycle/schedule after a period of time. This allows you to use manual PTZ control, yet return to a normally scheduled PTZ preset position after a period of time without manual PTZ control.

When adding or editing an event you have these options:

If the time is set with proximity to either sunrise or sunset, you may instruct the software to
maintain this offset and move the event accordingly as the seasons change. Sunrise and
sunset are only accurate if you have set your latitude and longitude on the Schedule page in
global Settings.
Each event may be set to fire only when specific profiles are active.

WEBCASTING
If you want this camera to be visible for remote clients, browsers and apps, leave this
Enabled.

Direct-to-disc recording (by definition) does not include video overlays from Blue Iris such
as the date and time. However, you may select here to automatically draw these items to
captured video frames as they are served to remote clients.

Windows Media
Although largely deprecated at this point (even by Microsoft), it is still possible for the
software to push a video stream to a Windows Media Server or to serve a Windows Media
stream on a specific port (users “pull” from this port). Options exist here to select the
maximum number of viewers for pull or to specify the username and password for a push
server. Windows Media encoding options are possible:

Flash Media Live Encoding
The RTMP protocol may be used to push video to a Flash Media server. There are many
popular services for this such as YouTube and Ustream (a paid service), or you may have your
own server located on premises or elsewhere.

If you are hosting a video stream with a number of viewers, it will be more efficient (and maybe only possible) to serve the video in this way, rather than having a large number of clients attempting to connect to your Blue Iris server directly.
Users have reported that the streaming is more stable to these servers if audio is included,
regardless of whether it is just silence or not.
Use the Configure button to define the required bit rate and frame size and other encoding
parameters as required by your server.

IMAGE POSTING
This feature may be used to periodically upload images to a web space or to a local folder.

You may select the Quality and Size for the images generated. You may use &CAM for the
Filename as well as other standard time formatting codes as documented in the Alerts and
Actions chapter. The .jpg suffix is automatically appended and should not be included here.
When you request a Ring of images, a number is appended to the filename in sequence
beginning with 0 and the software replaces previous images in a round-robin fashion. If you
also select the preserve time order feature, the newest image is always number 0, then 1,
etc. Use this with some caution however, as it involves a rename operation on the server for
each file after each upload, and this can take significant time to complete.
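The two ring-naming modes can be illustrated with a small sketch. This is a hypothetical model of the behavior described above (slot numbering and the `cam` filename stem are illustrative, and the real suffix comes from your Filename setting plus .jpg):

```python
def ring_state(uploads, ring_size=4):
    """Without 'preserve time order': which upload each ring slot holds
    after 'uploads' uploads, overwriting round-robin."""
    slots = {}
    for n in range(uploads):
        slots[n % ring_size] = n   # each upload overwrites the oldest slot
    return slots

def preserve_order_renames(ring_size=4):
    """With 'preserve time order': the renames each upload forces so the
    newest image is always number 0. This per-upload rename pass is the
    cost the manual warns about."""
    return [(f"cam{i}.jpg", f"cam{i + 1}.jpg") for i in range(ring_size - 2, -1, -1)]

# Six uploads into a ring of four: slots 0 and 1 hold the newest images.
assert ring_state(6, 4) == {0: 4, 1: 5, 2: 2, 3: 3}

# A ring of four requires three renames on the server before every upload.
assert preserve_order_renames(4) == [
    ('cam2.jpg', 'cam3.jpg'),
    ('cam1.jpg', 'cam2.jpg'),
    ('cam0.jpg', 'cam1.jpg'),
]
```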
Images may be uploaded on a timed basis, and/or only in response to a camera trigger or
use of the snapshot command. It’s also possible to queue an upload manually via a
command on the camera’s right-click menu.
By default, images are only uploaded when the camera has a signal. If you would like to
continue posting images through no-signal periods, an option is given for this.
You may have the software upload files for you automatically upon software (or camera)
startup and shutdown. All files found in the startup and shutdown folders in the Blue Iris
program file folder under a subfolder cameras and the camera’s short name will be uploaded.
Please be aware that this may delay a software (or camera) shutdown as these files are being
uploaded.

WATCHDOG
The Watchdog function waits for the camera to go offline and then replaces the image with a
no signal message, possibly with the last known image from the camera as well.

The timeout period defines the duration of time before the watchdog kicks in. The
software will then automatically attempt to restart the camera stream by reconnecting to the
network IP camera. If a number of timeouts occur in succession, you may choose to
completely restart the camera window and/or send a reboot command to the camera as
well. Of course, depending on how or why the camera is offline, it may not receive that
command.
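The escalation described above (timeout, reconnect, then full restart after repeated timeouts) can be sketched as a simple state machine. This is a hypothetical illustration, not Blue Iris code; the thresholds and action names are assumptions:

```python
class Watchdog:
    """Hypothetical sketch of the watchdog: after 'timeout' seconds with no
    frames, reconnect; after 'max_timeouts' consecutive timeouts, escalate
    to restarting the camera window (or sending a reboot command)."""
    def __init__(self, timeout=30.0, max_timeouts=3):
        self.timeout = timeout
        self.max_timeouts = max_timeouts
        self.last_frame = 0.0
        self.consecutive = 0

    def on_frame(self, now):
        self.last_frame = now
        self.consecutive = 0          # signal restored resets the count

    def check(self, now):
        """Return the recovery action to take, if any."""
        if now - self.last_frame < self.timeout:
            return None
        self.consecutive += 1
        self.last_frame = now         # start a fresh timeout period
        if self.consecutive >= self.max_timeouts:
            return 'restart_camera'   # e.g. restart window / reboot camera
        return 'reconnect'

wd = Watchdog(timeout=30.0, max_timeouts=3)
wd.on_frame(0.0)
assert wd.check(10.0) is None              # still receiving frames recently
assert wd.check(31.0) == 'reconnect'       # first timeout
assert wd.check(62.0) == 'reconnect'       # second timeout
assert wd.check(93.0) == 'restart_camera'  # third timeout escalates
wd.on_frame(94.0)
assert wd.check(95.0) is None              # signal restored
```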
The option exists to Delay motion detection for a period of time following the timeout
period. This prevents potentially unwanted triggering that may result when the camera
image suddenly jumps forward in time.
Primarily for analog cameras, which may display a green or blue frame when there is no signal,
an option exists to interpret such a frame as signal loss.
It’s possible to trigger the camera following a specified number of watchdog timeouts, and
then continue triggering until the signal is restored. This may be useful if you want
recording or image posting to kick in, or to fire camera trigger alerts.
However if you want to fire specific alert actions in response to signal loss or reacquisition,
you may define a separate action set for each condition here instead.