Sunday, February 12, 2017

New toy - Raspberry Pi Camera v2.1


Capturing still images with the Raspberry Pi Camera v2.


After installing the ribbon cable and booting the Pi,

raspistill -o cam.jpg

the resulting image had a resolution of 2592 x 1944 pixels, which corresponds to 5 MP (the resolution of the original RPi camera).
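A quick sanity check of the pixel counts, using nothing but shell arithmetic:

```shell
# 2592 x 1944 (camera v1) and 3280 x 2464 (camera v2)
echo $(( 2592 * 1944 ))   # 5038848 pixels, ~5 MP
echo $(( 3280 * 2464 ))   # 8081920 pixels, ~8 MP
```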

After running

apt-get upgrade 
rpi-update


I am getting 8 MP pictures, 3280 x 2464 pixels in size. This is what exiv2 reports:

File size       : 1701799 Bytes
MIME type       : image/jpeg
Image size      : 3280 x 2464
Camera make     : RaspberryPi
Camera model    : RP_imx219
Image timestamp : 2017:01:27 22:35:43
Image number    : 
Exposure time   : 1/8 s
Aperture        : F2
Exposure bias   : 
Flash           : No flash
Flash bias      : 
Focal length    : 3.0 mm
Subject distance: 
ISO speed       : 320
Exposure mode   : Aperture priority
Metering mode   : Center weighted average
Macro mode      : 
Image quality   : 
Exif Resolution : 3280 x 2464
White balance   : Auto
Thumbnail       : image/jpeg, 24576 Bytes



I can also issue
raspistill -o cam5.jpg --raw

The resulting image is still a .jpg, but the file size is about 12 MB: the raw sensor data comes after the regular JPEG data.
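As far as I can tell, the appended Bayer block starts with a 'BRCM' header, so its byte offset inside the file can be located with grep. A small sketch (the file below is a synthetic stand-in, just to show the idea; substitute the real cam5.jpg):

```shell
# The raw block appended by --raw starts with a 'BRCM' header.
# grep -a treats the file as text, -b prints byte offsets, -o prints matches.
printf 'JPEGDATA....BRCM(bayer data)' > demo.jpg   # fake stand-in file
offset=$(grep -abo 'BRCM' demo.jpg | cut -d: -f1)
echo "raw block starts at byte $offset"            # byte 12 in this demo file
```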

Then, using the raw-to-DNG tool https://github.com/illes/raspiraw:
raspi_dng cam5.jpg cam5.dng



Nice.
Now, I just placed the camera on top of the focuser tube (with the eyepiece removed), and this is what the camera sees: the reflection of the primary and secondary mirrors.


Streaming video


For a quick preview, I found a method that is simple to set up.
My "client" machine runs CentOS 7 Linux.
The method is described here:
http://raspberrypi.stackexchange.com/questions/27082/how-to-stream-raspivid-to-linux-and-osx-using-gstreamer-vlc-or-netcat

I used the alternative where the RasPi starts the video in listen mode:
1.) On the RasPi, start raspivid in 'listen' mode. It will wait for incoming connections:
raspivid -t 0 -w 640 -h 480 -fps 30 -o tcp://0.0.0.0:5001 -l

2.) On my Linux computer:
 nc raspberrypi 5001 | mplayer -fps 200 -demuxer h264es -

I packaged both commands in a single script that I can start from my Linux box, so I don't have to log in to the RasPi:
#!/bin/bash

# Video Streaming Preview from a Raspberry Pi, to a PC running Linux 
# Tested 2017-Jan, using raspivid Camera App v1.3.12

# a working, key-exchanged ssh setup is needed for this script to work

# 1.) On the RasPi, start raspivid to listen at port 5001, in background 

ssh pi@raspberrypi raspivid -t 0 -w 640 -h 480 -fps 30 -o tcp://0.0.0.0:5001 -l &

# some additional switches can be added to raspivid
# the -hf switch horizontally flips the image

# wait until the RasPi is ready listening
sleep 2


# 2.) On Linux (the box where this script is run), connect to the RasPi's port 5001 and play

nc raspberrypi 5001 | mplayer -fps 200 -demuxer h264es -

# once you stop mplayer, the TCP connection is closed, and on the RasPi side raspivid exits as well.
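The fixed `sleep 2` in the script is only a guess. A sketch of a more robust alternative, assuming bash (it uses bash's /dev/tcp pseudo-device, so it won't work in plain sh), polls until the port actually accepts connections:

```shell
#!/bin/bash
# Poll a TCP port until it accepts a connection, instead of a fixed sleep.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-20}
    local i
    for (( i = 0; i < tries; i++ )); do
        # try to open a connection via bash's /dev/tcp pseudo-device
        if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
            return 0          # port is accepting connections
        fi
        sleep 0.5
    done
    return 1                  # gave up
}

# usage in the script above, in place of 'sleep 2':
# wait_for_port raspberrypi 5001 || { echo "raspivid not listening"; exit 1; }
```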


Mounting the camera on the telescope

I have a 130P/650 mm Newtonian telescope, with 1.25" diameter eyepieces.
To achieve the so-called 'prime focus', that is, to project the image from the secondary mirror directly onto the sensor without any optics involved, I removed the lens of the camera (counterclockwise rotation, some initial force needed).

I ordered a 3D print of this part, which mounts the RasPi camera perfectly on the focuser:





Things to solve
- Where to place the camera when it is removed from the focuser? Maybe a magnetic support?
- How to focus?
- How to preview the stream when outdoors?


Saturday, January 14, 2017

Live Preview Streaming from the Nikon D3200 using gphoto2 and ffmpeg


I tried to get live preview streaming from my Nikon, and found some instructions.




gphoto2 has a command, --capture-movie, which records the live feed from the camera as an MJPEG stream. It can pipe the movie to stdout.

On the same Linux box (CentOS 7), the following worked for me.


In one terminal window, I started mplayer to 'listen' for a TCP connection on port 5001:
mplayer -demuxer mpegts 'ffmpeg://tcp://127.0.0.1:5001?listen'

In another window, I started the capture and sent it to that port:

gphoto2 --capture-movie --stdout | ffmpeg -f mjpeg -i pipe:0 -r 20 -vcodec libx264 -pix_fmt yuv420p -tune zerolatency -preset ultrafast -f mpegts "tcp://127.0.0.1:5001"

This seems to work: I was able to see the video coming from the camera in the mplayer window.
One inconvenience: there is considerable lag.


Another option is to use ffserver. This way, multiple client players (including web browsers) can connect.
The idea is that gphoto2 -> ffmpeg provides the live feed, while ffserver does the 'broadcast'.
For ffserver, an ffserver.conf needs to be provided; here is a custom one.

Each stream is marked with NoAudio.

HttpPort 5001
MaxHTTPConnections 200
MaxClients 100
MaxBandwidth 1000000

<Feed feed1.ffm>
File /tmp/feed1.ffm 
FileMaxSize 5M 
</Feed>



<Stream test.mjpg>
    Feed feed1.ffm
    Format mpjpeg
    VideoSize 640x480
    VideoFrameRate 15
    VideoBitRate 1024
    VideoIntraOnly
    NoAudio
    Strict -1
</Stream>


<Stream live.ts>
 Format mpegts
 Feed feed1.ffm
 VideoCodec libx264
    VideoSize 640x480
 VideoFrameRate 25
 VideoBitRate 1024
    VideoGopSize 5
 PixelFormat yuv420p
    NoAudio
</Stream>


# Flash

<Stream test.swf>
    Feed feed1.ffm
    Format swf
 VideoCodec flv
    VideoSize 640x480
 VideoFrameRate 25
 VideoBitRate 1024
    VideoGopSize 5
 PixelFormat yuv420p
    NoAudio
</Stream>

<Stream stat.html>
Format status

# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255

</Stream>



To start the server, type
ffserver -f your-ffserver.config

Once the server is up and running, start the live feed:
gphoto2 --capture-movie --stdout | ffmpeg -f mjpeg -re -i pipe:0 http://localhost:5001/feed1.ffm


At this point, you can point your browser to
http://localhost:5001/stat.html. You will see links to the streams.

References

https://trac.ffmpeg.org/wiki/StreamingGuide
https://trac.ffmpeg.org/wiki/ffserver

Sunday, January 8, 2017

Astrophotographic Stacking


I'm new to astrophotography. I've read about techniques such as image registration and alignment, and I would like to try out some basic processing.
The steps I describe are not professional guidance; this blog post is merely a learning note for myself. I'm exploring different tools and learning to use them.


On a winter night in 2017, I shot about 30 frames of Orion, using a Nikon D3200 with a 50mm F1.8 lens, at ISO 800 and 1.3 s exposure.

The raw (.nef) images are not impressive, they look like this:


Now, let's do some ImageMagick:
First, I converted them to JPEG and normalized the levels, to have something visible.
For illustration purposes, I cropped only an HD-video-sized region from the middle.

mkdir -p jpegs
swidth=6036
sheight=4013
twidth=1920
theight=1080
offsx=$(( (swidth-twidth)/2 ))
offsy=$(( (sheight-theight)/2 + 500 ))
convertopts="-crop ${twidth}x${theight}+${offsx}+${offsy} -normalize"
for f in *.NEF 
do
    outfile=jpegs/${f%.NEF}.jpg
    if [ ! -f "$outfile" ]
    then
        sem -j 2 convert "$f" $convertopts "$outfile"
    fi
done
sem --wait
cd jpegs
ffmpeg -framerate 2 -pattern_type glob -i \*.jpg -b:v 1000000 -y movie.mp4
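The crop window arithmetic from the script can be checked in isolation (plain shell arithmetic, using the same sensor and target sizes as above):

```shell
# Center a 1920x1080 window in a 6036x4013 frame, then shift it 500 px down
swidth=6036; sheight=4013
twidth=1920; theight=1080
offsx=$(( (swidth - twidth) / 2 ))          # (6036-1920)/2 = 2058
offsy=$(( (sheight - theight) / 2 + 500 ))  # (4013-1080)/2 + 500 = 1966
echo "-crop ${twidth}x${theight}+${offsx}+${offsy}"
```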


Note that level normalization shouldn't really be done at this stage, but I'm mainly interested in understanding the process.
Due to the simple normalization, the noise is strongly amplified, as seen below:

As you can see, the stars drift out of the field of view, since I used a simple, non-tracking tripod for the camera.
The very last frame was taken with the lens cap on. It's a 'dark' frame, and it will be subtracted from the other images to remove some background noise.

Note that the 'dark' frame is really dark when viewed with normal parameters. What you see in the video is the heavily amplified noise.
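The principle of dark-frame subtraction, shown for a single pixel with hypothetical values (real tools do this per pixel over the whole frame):

```shell
# Per pixel: calibrated = light - dark, clamped at 0 so values never go negative
light=57; dark=12                              # hypothetical pixel values
echo $(( light > dark ? light - dark : 0 ))    # 45
```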

Now, enter Siril

Siril is meant to be Iris for Linux (sirI-L). It is an astronomical image processing tool, able to convert, pre-process images, help aligning them automatically or manually, stack them and enhance final images.
I use Siril to preprocess (subtract the dark frame), align (so the stars don't drift away), and stack the images.

First, the images are converted from .nef to .fit, the FITS-based image format used by Siril (and other astronomical imaging software).


Then, the preprocessing results are stored as pp_*.fit. Here is a video of the green channel only
(for some reason, ImageMagick converts a .fit into 3 distinct JPEG files, one per channel).




Align images.
Using the "Global star alignment" algorithm, on the green channel.
This video shows the registered sequence. As you can see, it's not perfect; there are still some minor movements.




Once the images are aligned, let's combine them into a single image. Siril offers multiple algorithms; I chose median stacking first.
19:16:24: Background noise value (channel: #0): 19.948 (3.044e-04)
19:16:24: Background noise value (channel: #1): 11.986 (1.829e-04)
19:16:24: Background noise value (channel: #2): 13.969 (2.132e-04)
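Median stacking picks, for every pixel position, the median of that pixel's values across all frames, which rejects outliers (hot pixels, satellite trails) that a plain average would smear in. For one pixel across five hypothetical frames:

```shell
# One pixel's value in each of 5 frames; 230 is an outlier (e.g. a hot pixel)
values="19 230 21 20 18"
# sort numerically and take the middle (3rd of 5) value
median=$(printf '%s\n' $values | sort -n | sed -n '3p')
echo "$median"    # 20: the outlier is rejected, unlike with averaging
```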

This is the resulting image

And this is the cropped region, with levels normalized.


Satisfactory - not yet. But it's a start.
*
* *
To be continued.