CSCI 250 Lab #2: Introducing OpenCV

Goal / Turnin

The goal of this lab is to get familiar with OpenCV, the popular computer-vision package and the second major technology you will use in your final project for this course. As in yesterday’s lab, you will finish by demonstrating your program to me. Then one of your team members will upload your code to your team’s GitHub repository, so that the entire team gets credit for completing the lab. Since this lab doesn’t require any special equipment (other than a webcam like the one in your laptop), you should also feel free to work independently if you prefer, and submit a file with your name in the filename.

Part 1: (Install and) import OpenCV

As we did with Arduino yesterday, I encourage you to install the software (in this case, opencv-python) on your laptop. If you’re not familiar with installing Python packages, we’ll do it together; otherwise, feel free to try the installation on your own. I’ve also installed this package on all the machines in the classroom. For running Python on these classroom machines, you should open up a terminal window and run /usr/bin/python3, which will ensure that you access the correct version of Python.

Once you’ve installed opencv-python (or plan to run it on one of the classroom machines), you can check its availability by opening your favorite Python shell (IDLE, Spyder, VSCode, …) and typing import cv2. Again, on the classroom machines you should launch Python by running /usr/bin/python3 in a terminal window.
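
A quick way to confirm the import worked is to print OpenCV’s version string:

    import cv2
    print(cv2.__version__)   # should print a version number such as 4.x.y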

Part 2: Live image capture

What HelloWorld is to intro programming, and Blinky is to Arduino, live image (video) capture is to OpenCV. As usual, a bit of googling should reveal a simple program that will capture and display live video from your laptop’s webcam (or from a USB camera I can provide). Copy/paste that code into a file named blob.py, and add a comment at the top giving the URL where you got it. Once you’ve got that working, you’re ready to move on to the next step.
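
To give you an idea of the shape of the code you’re looking for, here is a minimal sketch (the camera index 0 is an assumption; an external USB camera may show up as index 1):

    import cv2

    cap = cv2.VideoCapture(0)                    # 0 = built-in webcam (usually)

    while True:
        ok, frame = cap.read()                   # grab one frame
        if not ok:
            break
        cv2.imshow("live", frame)                # display it in a window
        if cv2.waitKey(1) & 0xFF == ord('q'):    # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()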

Part 3: Blob detection

OpenCV is a rich software package offering a variety of algorithms: face recognition, video manipulation, and much more. For our projects in this course we will use one such algorithm, blob detection, which attempts to locate visually coherent objects (“blobs”) in an image, typically by finding patches of coherent color.

As in the previous part, you can find some color-blob-detection code through a little googling. I recommend first writing a separate program that detects blobs in an image read from a file, then merging that program with your live-capture program to produce your final program; a sketch of the file-based step appears below. Here is an image you can use to test your initial blob-detection code, which we can print out to test your final program.
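
One way the file-based step might look, assuming an HSV color threshold plus contours (the code you find may instead use cv2.SimpleBlobDetector, which is also fine). The filename test.jpg and the HSV range for blue are placeholders you will need to adjust:

    import cv2
    import numpy as np

    img = cv2.imread("test.jpg")                  # placeholder filename
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # color thresholds are easier in HSV

    # Rough HSV range for blue; tune for your image and lighting
    lower = np.array([100, 150, 50])
    upper = np.array([130, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)         # white wherever a pixel is "blue enough"

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:              # ignore tiny specks
            M = cv2.moments(c)
            cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
            cv2.circle(img, (cx, cy), 10, (0, 0, 255), 2)   # mark the blob center

    cv2.imshow("blobs", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()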

The first usable Google hit I got showed code that reads in a JPEG image and displays blobs in it. By combining this code with my capture program (being sure to mention the URL for the blob code), I got a program that displays blobs in live-captured video. (Hint: there are typically three different variable names that people use for an image: image, img, and frame.) Perhaps you will be luckier and find code that already does all this! In either case, your team’s turnin for this lab will be your blob.py, which will display blobs in a live-captured image. Specifically, your program should track the location of the blue blobs and display the camera image with a little marker (like a circle) at the blue blobs’ centers. Once you’ve figured out the trick, add a brief comment in your code about how it works.
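
Merging the two programs might end up looking roughly like the sketch below; notice that the blob-marking code is the same whether the image variable is called image, img, or frame, so factoring it into a function makes the merge painless. (The camera index and the HSV range are again assumptions to adjust.)

    import cv2
    import numpy as np

    # Rough HSV range for blue; tune for your lighting and the printed test image
    LOWER_BLUE = np.array([100, 150, 50])
    UPPER_BLUE = np.array([130, 255, 255])

    def mark_blue_blobs(frame):
        # Draw a small circle at the center of each blue blob in the image
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 200:          # skip tiny noise blobs
                M = cv2.moments(c)
                cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
                cv2.circle(frame, (cx, cy), 10, (0, 0, 255), 2)

    cap = cv2.VideoCapture(0)                     # 0 = built-in webcam (usually)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mark_blue_blobs(frame)
        cv2.imshow("blobs", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):     # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()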