Monthly Archives: October 2014

PORT or STARBOARD, PIC I/O registers ahoy.

Single color detection using OpenCV

ZC's blog

This blog has been stopped and transferred to https://linzichun.com in June 2018. Some historical posts are hidden!

—————————————————————–

After a few days of learning OpenCV, I started writing my first OpenCV program. The requirement is to detect a single color and find the three largest objects of that color. It's a very basic program, but you can learn a lot about computer vision if you understand the code. OpenCV provides so many functions that you have to know what you want and what each function does.

This program still runs on the Raspberry Pi, at about 8.5 frames per second. If you want to find more or fewer than the 3 biggest objects, you can change N. Here is the code (you can also get it on my GitHub), sg_color.c:

Here is makefile:

LIBS = `pkg-config --libs opencv`
CFLAGS = `pkg-config --cflags opencv`
objects = sg_color.o

sg_color: $(objects)
	gcc $(CFLAGS) -o sg_color $(objects) $(LIBS)

.PHONY: clean…

View original post 11 more words

Visual Intelligence : Human vs Machine

Blog

Ever wondered how visually intelligent our brain is? Or how much machine vision has achieved in mimicking human vision so far? Let's start by observing a picture.

Human Vision (1)

Each of these children is observing the world surrounding him or her. They can identify the shape and color of various patches in the room. They can also classify objects and the actions of the teacher and, most importantly, identify the visually related social behavior of objects in the environment.

By the age of two, our visual cortex becomes so well trained that we can understand any scene without rationalizing its pixel space. This becomes clear from the following example:

The famous Ponzo optical illusion

Observe how quickly your brain understands the scene, even though it falters in judging that the three vehicles are actually equal in size. Machine vision, on the other hand, is still in its infancy. There are algorithms that accurately…

View original post 157 more words

Researchers are using deep learning to predict how we pose. It’s more important than it sounds

Gigaom

A team of New York University researchers that includes Facebook AI Lab Director Yann LeCun recently published a paper explaining how they built a deep learning model capable of predicting the position of human limbs in images. That field of computer vision, called human pose estimation, doesn’t get as much attention as things like facial recognition or object recognition, but it’s actually quite difficult and potentially very important in fields such as human-computer interaction and computer animation.

Computers that can accurately identify the positions of people’s arms, legs, joints and general body alignment could lead to better gesture-based controls for interactive displays, more-accurate markerless (i.e., no sensors stuck to people’s bodies) motion-capture systems, and robots (or other computers) that can infer actions as well as identify objects. Even in situations where it’s difficult or impossible to see or distinguish a part of somebody’s body, or even an entire side, pose-estimation systems should be smart…

View original post 381 more words