Saturday, January 14, 2017

Live Preview Streaming from the Nikon D3200 using gphoto2 and ffmpeg


I tried to get a live preview stream from my Nikon, following some instructions I found online.




gphoto2 has a command, --capture-movie, which records the camera's live view feed as an MJPEG stream; it can pipe the movie to stdout.
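Before wiring up a player, it's worth verifying the pipe on its own. A minimal sketch (the duration and filename here are arbitrary examples; gphoto2's --capture-movie accepts an optional frame count or duration with an 's' suffix):

```shell
# Arbitrary example values.
DURATION=10s
OUT=preview.mjpg

# Record a short clip of live view to a file.
gphoto2 --capture-movie=$DURATION --stdout > "$OUT"

# Sanity check: ask ffmpeg to decode the result without writing anything.
ffmpeg -f mjpeg -i "$OUT" -f null -
```

If ffmpeg decodes the file without complaints, the camera side of the pipeline is fine.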

On a single Linux box (CentOS 7), the following worked.


In one terminal window, I started mplayer to listen for a TCP connection on port 5001:
mplayer -demuxer mpegts 'ffmpeg://tcp://127.0.0.1:5001?listen'

In another window, I started the capture and sent it to that port:

gphoto2 --capture-movie --stdout | ffmpeg -f mjpeg -i pipe:0 -r 20 -vcodec libx264 -pix_fmt yuv420p -tune zerolatency -preset ultrafast -f mpegts "tcp://127.0.0.1:5001"

This seemed to work: I was able to see the video from the camera in the mplayer window.
There is one inconvenience, though: a considerable lag.
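Much of that lag typically comes from buffering in the player and demuxer rather than from the camera itself. One way to trim it is to replace mplayer with ffplay and disable input buffering; the flags below are a common low-latency combination, not something I have verified with this particular camera:

```shell
# Listening side, with input buffering mostly disabled.
PORT=5001
ffplay -fflags nobuffer -flags low_delay -probesize 32 \
    -f mpegts "tcp://127.0.0.1:${PORT}?listen"
```

The sending side already uses -tune zerolatency and -preset ultrafast, which are the main latency knobs on the encoder.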


Another option is to use ffserver, so that multiple client players (including web browsers) can connect.
The idea is that gphoto2 -> ffmpeg provides the live feed, while ffserver does the broadcasting.
ffserver needs an ffserver.conf; here is a custom one.

Each stream is marked NoAudio, since the live view feed carries no sound.

HttpPort 5001
MaxHTTPConnections 200
MaxClients 100
MaxBandwidth 1000000

<Feed feed1.ffm>
File /tmp/feed1.ffm 
FileMaxSize 5M 
</Feed>



<Stream test.mjpg>
    Feed feed1.ffm
    Format mpjpeg
    VideoSize 640x480
    VideoFrameRate 15
    VideoBitRate 1024
    VideoIntraOnly
    NoAudio
    Strict -1
</Stream>


<Stream live.ts>
    Feed feed1.ffm
    Format mpegts
    VideoCodec libx264
    VideoSize 640x480
    VideoFrameRate 25
    VideoBitRate 1024
    VideoGopSize 5
    PixelFormat yuv420p
    NoAudio
</Stream>


# Flash

<Stream test.swf>
    Feed feed1.ffm
    Format swf
    VideoCodec flv
    VideoSize 640x480
    VideoFrameRate 25
    VideoBitRate 1024
    VideoGopSize 5
    PixelFormat yuv420p
    NoAudio
</Stream>

<Stream stat.html>
    Format status

    # Only allow local people to get the status
    ACL allow localhost
    ACL allow 192.168.0.0 192.168.255.255
</Stream>



To start the server, type
ffserver -f your-ffserver.config

Once the server is up and running, start the live feed:
gphoto2 --capture-movie --stdout | ffmpeg -f mjpeg -re -i pipe:0 http://localhost:5001/feed1.ffm


At this point, you can point your browser to
http://localhost:5001/stat.html (the port and stream name defined in the config above). You will see the links to the streams.
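Assuming the config above (HttpPort 5001, streams stat.html and live.ts), the same check can be done from the command line:

```shell
PORT=5001

# The status page lists the configured streams and connected clients.
curl -s "http://localhost:${PORT}/stat.html" | grep -i stream

# Or pull the mpegts stream directly into any player, for example:
mplayer "http://localhost:${PORT}/live.ts"
```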

References

https://trac.ffmpeg.org/wiki/StreamingGuide
https://trac.ffmpeg.org/wiki/ffserver

Sunday, January 8, 2017

Astrophotographic Stacking


I'm new to astrophotography. I've read about techniques such as image registration and alignment, and I'd like to try out some basic processing.
The steps I describe are not professional guidance. This blog post is merely a learning note for myself, as I explore different tools and learn to use them.


On a winter night in 2017, I shot about 30 frames of Orion, using a Nikon D3200 with a 50mm f/1.8 lens, at ISO 800 and 1.3 s exposure.

The raw (.nef) images are not impressive, they look like this:


Now, let's do some ImageMagick.
First, I converted the raws to JPEG and normalized the levels, to have something visible.
For illustration purposes, I cropped only an HD-video-sized region from the middle of each frame.

mkdir -p jpegs
swidth=6036       # source raw dimensions
sheight=4013
twidth=1920       # target (HD) crop size
theight=1080
offsx=$(( (swidth-twidth)/2 ))
offsy=$(( (sheight-theight)/2 + 500 ))
convertopts="-crop ${twidth}x${theight}+${offsx}+${offsy} -normalize"
for f in *.NEF
do
    outfile=jpegs/${f%.NEF}.jpg
    if [ ! -f "$outfile" ]
    then
        # GNU parallel's sem runs at most 2 conversions at a time
        sem -j 2 convert "$f" $convertopts "$outfile"
    fi
done
sem --wait
cd jpegs
ffmpeg -framerate 2 -pattern_type glob -i '*.jpg' -b:v 1000000 -y movie.mp4
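For reference, the crop geometry computed by the snippet above (a 1920x1080 window centered horizontally and pushed 500 px below the vertical center) works out like this:

```shell
# Re-derive the crop window passed to convert.
swidth=6036; sheight=4013
twidth=1920; theight=1080
offsx=$(( (swidth - twidth) / 2 ))
offsy=$(( (sheight - theight) / 2 + 500 ))
echo "crop geometry: ${twidth}x${theight}+${offsx}+${offsy}"
# -> crop geometry: 1920x1080+2058+1966
```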


Note that level normalization shouldn't really be done at this stage, but I'm mainly interested in understanding the process.
Because of this naive per-frame normalization, the noise is strongly amplified, as seen below:

As you can see, the stars drift out of the field of view, since I used a simple, non-tracking tripod for the camera.
The very last frame was taken with the lens cap on. It is a 'dark' frame, and it will be subtracted from the other images to remove some of the background noise.

Note that the 'dark' frame really is dark when viewed under normal parameters; what you see in the video is the heavily amplified noise.
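Siril handles the dark subtraction below, but the basic idea can be sketched with ImageMagick alone. The filenames here are hypothetical, and note that the image order matters with -compose minus (it subtracts the second image from the first):

```shell
LIGHT=light_0001.jpg   # hypothetical light frame
DARK=dark.jpg          # hypothetical dark frame

# 'minus' computes first-minus-second, i.e. light - dark here.
convert "$LIGHT" "$DARK" -compose minus -composite "${LIGHT%.jpg}_d.jpg"
```

In practice this should be done on the raw/linear data, not on normalized JPEGs, which is exactly why a dedicated tool like Siril is used.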

Now, enter Siril

Siril is meant to be Iris for Linux (sirI-L). It is an astronomical image processing tool, able to convert, pre-process images, help aligning them automatically or manually, stack them and enhance final images.
I use Siril to preprocess (subtract the dark frame), align (so the stars don't drift), and stack the images.

First, the images are converted from .nef to .fit, the FITS format used by Siril (and other astronomical imaging software).


Then the preprocessing results are stored as pp_*.fit. Here is a video of the green channel only
(for some reason, ImageMagick converts a .fit into three distinct JPEG files, one per channel).
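If you want that channel split explicitly instead of relying on ImageMagick's default behavior, its -separate operator does it (the input filename and output naming pattern here are just examples):

```shell
IN=pp_00001.fit        # hypothetical preprocessed frame

# Split an RGB image into per-channel grayscale files:
# channel_0 = red, channel_1 = green, channel_2 = blue.
convert "$IN" -separate "channel_%d.jpg"
```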




Next, align the images, using the "Global star alignment" algorithm on the green channel.
This video shows the registered sequence. As you can see, it's not perfect; there are still some minor movements.




Once the images are aligned, let's combine them into a single image. Siril offers multiple stacking algorithms; I chose median stacking first. It reported the following background noise values:
19:16:24: Background noise value (channel: #0): 19.948 (3.044e-04)
19:16:24: Background noise value (channel: #1): 11.986 (1.829e-04)
19:16:24: Background noise value (channel: #2): 13.969 (2.132e-04)
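As a rough sanity check on what stacking should buy: for N statistically independent frames with Gaussian noise, mean stacking reduces background noise by about sqrt(N), and median stacking achieves somewhat less (its noise is roughly 25% higher than the mean's for large N). With the ~30 frames here:

```shell
# Expected noise-reduction factors for N stacked frames
# (Gaussian-noise assumption; median penalty ~1.25 is asymptotic).
N=30
awk -v n="$N" 'BEGIN { printf "mean: %.1fx  median: ~%.1fx\n", sqrt(n), sqrt(n)/1.25 }'
# -> mean: 5.5x  median: ~4.4x
```

Median stacking is still often preferred because it rejects outliers such as satellite trails and hot pixels, which a plain mean would smear into the result.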

This is the resulting image

And this is the cropped region, with levels normalized.


Satisfactory - not yet. But it's a start.
*
* *
To be continued.