Cognitive science
VideoIQ: Demystifying Self-Learning Video Analytics
Source: VideoIQ | Date: 11/13/2013

Animate vision is also described as a response-driven learning process: lessons are learned from others, through corrections and feedback, and the more interactions there are with "teachers", the faster the learning. Response and feedback can also come from interactions with the environment; the classic example is a hot stove or a thorn, where the injury itself marks the action as a mistake. Whether it proceeds by bootstrapping or by response and feedback, the learning process is continuous. Every time an object is "seen", a complex and sophisticated neural network is continuously learning behind the scenes, which is what allows the gift of sight to be taken for granted.

VideoIQ introduces its B.R.A.I.N. model, which enables cameras programmed to mimic this neural network and "learn" through interactions with the environment. When a camera is first powered on, its field of view is new and unfamiliar. Much like a human in an unfamiliar environment, it instinctively attempts to identify everything that seems recognizable in form or function, and over time it becomes familiar with the environment and the placement of objects within it. If an additional item then appears in its field of view, the camera might first classify it as a suspicious object. Should the item remain in the environment over a period of time, however, the camera will "learn" that it belongs there, overriding its initial assessment. Not only does it "remember" or "learn" […]
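To make the described behaviour concrete, the following is a minimal, hypothetical sketch of how a "new object becomes familiar after dwelling in the scene" rule might be expressed. The names (SceneModel, TrackedObject, dwell_frames) and the frame-counting logic are illustrative assumptions, not VideoIQ's actual B.R.A.I.N. implementation.

```python
from dataclasses import dataclass, field


@dataclass
class TrackedObject:
    object_id: int
    frames_static: int = 0           # consecutive frames the object has stayed in place
    flagged_suspicious: bool = True  # new objects start out as suspicious


@dataclass
class SceneModel:
    # Assumed threshold: after this many consecutive static frames, an object
    # is treated as part of the normal scene (e.g. ~30 fps * 60 s = 1800 frames).
    dwell_frames: int = 1800
    known_objects: dict = field(default_factory=dict)

    def observe(self, object_id: int, moved: bool) -> str:
        """Update the model with one observation of an object and return its status."""
        obj = self.known_objects.setdefault(object_id, TrackedObject(object_id))

        if moved:
            # Movement resets the dwell counter and re-raises suspicion.
            obj.frames_static = 0
            obj.flagged_suspicious = True
        else:
            obj.frames_static += 1
            if obj.frames_static >= self.dwell_frames:
                # The object has remained long enough to be learned as part of the scene.
                obj.flagged_suspicious = False

        return "suspicious" if obj.flagged_suspicious else "familiar"


if __name__ == "__main__":
    scene = SceneModel(dwell_frames=5)  # short dwell time just for the demo
    for frame in range(7):
        status = scene.observe(object_id=1, moved=False)
        print(f"frame {frame}: object 1 is {status}")
```

Run as a script, the demo shows the object reported as "suspicious" for the first few frames and then as "familiar" once it has stayed put past the dwell threshold, mirroring the override of the camera's initial classification described above.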
Source: www.asmag.com