Sunday, September 7, 2014

Grasshopper3: Circular Buffer, Post Triggering, and Continuous Modes

Previously, I implemented a bare-bones RAM buffering program for the Grasshopper3 USB3 camera. The idea was to strip out all operations other than transferring raw image data from the USB3 port to RAM, so that the full frame rate of the camera can be buffered even on a Microsoft Surface 2 tablet. While the RAM buffer is filling, no image conversion or saving to disk operations are going on. The GUI is running on another thread, and the image preview rate is held to 30Hz.

One-shot linear buffer with pre-trigger: 1) After triggering, frames are transferred into RAM (yellow). 2) RAM buffer is full. 3) Images are converted and saved to disk (not in real-time).
At the time I also tried to implement a circular buffer, where the oldest images are continuously overwritten in RAM. This allows for post-triggering, a common feature of high-speed cameras. The motivation for post-triggering is that the buffer is short (order of seconds) but you don't know when the exciting thing is going to happen. So you capture image data at full frame rate, continuously overwriting the oldest images in the buffer, until something exciting happens. The trigger can come after the action, stopping the image data capture and locking the previous N image frames in the buffer. The entire buffer can then be saved starting from the oldest image and ending at the trigger.

Circular buffer with post-trigger: 1) Buffer begins filling with frames. 2) After full, the oldest frame is overwritten with the newest. 3) A post-trigger stops buffering new frames and starts saving the previous N frames to disk.
It didn't work the first time I tried it; the frame rate would drop after the first pass through the buffer. But a little code cleanup fixed it - now the first pass constructs the array elements that make up the frame buffer, while subsequent passes assume the structures are already in place and just transfer in data. This makes a wonderful flat-top trapezoidal wave of RAM usage that corresponds exactly with the allocated buffer size:
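For the curious, the allocate-once trick looks something like this (a Python sketch of the idea, not the actual C# code; all names are made up):

```python
class CircularFrameBuffer:
    """Allocate-once ring buffer: slots are created on the first pass,
    then reused on every subsequent pass (no reallocation, so RAM usage
    stays flat once the buffer is full)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []     # grows only during the first pass
        self.head = 0       # index of the next slot to overwrite
        self.filled = 0     # frames written so far, saturating at capacity

    def push(self, frame_bytes):
        if len(self.slots) < self.capacity:
            self.slots.append(bytearray(frame_bytes))  # first pass: construct slot
        else:
            self.slots[self.head][:] = frame_bytes     # later passes: reuse memory
        self.head = (self.head + 1) % self.capacity
        self.filled = min(self.filled + 1, self.capacity)

    def post_trigger_dump(self):
        """After the post-trigger stops capture, return the last N frames,
        oldest first."""
        start = self.head if self.filled == self.capacity else 0
        return [bytes(self.slots[(start + i) % self.capacity])
                for i in range(self.filled)]
```

With a capacity of 4, pushing frames 0 through 5 leaves frames 2, 3, 4, 5 in the buffer, oldest first, which is exactly the save order the post-trigger wants.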

Post-triggering is not the only thing a circular buffer structure is good for. It can also be used as the basis for robust (buffered) continuous saving to disk. Assuming a fast enough write speed, frames can be continuously taken out of the buffer on a First-In First-Out (FIFO) basis and written to disk. I say "disk" but in order to get fast enough write speeds it really does need to be a solid-state drive. And even then it is a challenge.

For one, the sequential write speed of even the fastest SSDs struggles to keep up with USB3. To achieve maximum frame rate, the saved images must be in a raw format, both to keep the size down (one color per pixel, not de-Bayered) and to avoid having the processor bottleneck the entire process during image conversion. Luckily there is an option to just spit out raw Bayer data in 8- or 12-bit-per-pixel formats. IrfanView (my favorite image viewer) has a plug-in that is capable of parsing and color-processing raw images. The plug-in also works with the batch conversion portion of IrfanView, so you can convert an entire folder of raw frames.

The other challenge is that the operations required to save to disk take up processor time. In the FlyCap2 software that comes with the camera, the image capture loop has no trouble running at full frame rate, but turning on the saving operation causes the frame processing rate to drop on my laptop and especially on the MS Surface 2. To try to combat this problem, I did something I've never done before: actually intentionally write a multi-threaded application the right way. The image capture loop runs on one thread while the save loop runs on a separate thread. (And the GUI runs on an entirely different thread...) This way, a slow-down on the saving thread ideally doesn't cause a frame rate drop on the capture thread. The FIFO might fill up a little, but it can catch up later.
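The two-thread structure is basically a textbook producer/consumer queue. Here's a minimal Python sketch of the shape of it (the real program is C#, and the "frames" here are just placeholder byte strings):

```python
import queue
import threading

# The capture thread pushes frames into the FIFO; the save thread drains it.
# A slow save just grows the FIFO without stalling capture (until RAM runs out).
fifo = queue.Queue()
saved = []
STOP = object()  # sentinel telling the save thread to shut down

def capture_loop(n_frames):
    for i in range(n_frames):
        fifo.put(f"frame{i}".encode())  # stand-in for a USB3 transfer into RAM
    fifo.put(STOP)

def save_loop():
    while True:
        frame = fifo.get()
        if frame is STOP:
            break
        saved.append(frame)             # stand-in for convert + write to disk

t_cap = threading.Thread(target=capture_loop, args=(100,))
t_save = threading.Thread(target=save_loop)
t_cap.start()
t_save.start()
t_cap.join()
t_save.join()
```

The single consumer preserves frame order, so the files on disk come out in capture order even if the FIFO depth bounces around.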

Continuous saving: Images are put into the RAM buffer on one thread (yellow) and removed from it to write to disk on another thread (green). This happens simultaneously and continuously as long as the disk write can keep up.
There's another interesting twist to the continuous-saving circular buffer: the frame rate in doesn't necessarily have to equal the frame rate out. For example, it's possible to buffer into RAM at 150fps but save only every 5th frame, for 30fps output. Then, if something exciting happens, the outgoing rate can be switched to 150fps temporarily to capture the high-speed action. If the write-to-disk thread can't keep up, the FIFO grows in size. As long as the outgoing rate is switched back to 30fps before the buffer is full, the excess FIFO elements can be unloaded later.

The key parameter for this continuous saving mode is the number of frames of delay between the incoming and the outgoing thread. The target delay could be zero, but then you would have to know in advance if you want to switch to high-speed saving. Setting the target delay to some number part-way through the buffer allows for post-triggering of the high-speed saving period, which seems more useful. I added a buffer graphic to the GUI to show both the target and the actual saving delay during continuous saving. My mind still has trouble thinking about when to turn the frame rate divider on and off, but I think it could be useful in some cases.
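To get a feel for when to flip the divider on and off, here's a toy simulation of the FIFO depth over time (the disk rate is a made-up round number, not a measurement):

```python
def fifo_depth(seconds, fps_in, save_schedule, disk_fps):
    """Simulate FIFO depth once per second. save_schedule[t] is the frame
    rate divider during second t (5 -> save every 5th frame, 1 -> save all).
    The disk can write at most disk_fps frames per second; anything beyond
    that queues up in the FIFO and drains later."""
    depth = 0
    history = []
    for t in range(seconds):
        queued = fps_in // save_schedule[t]  # frames enqueued this second
        depth = max(0, depth + queued - disk_fps)
        history.append(depth)
    return history

# 150fps in, disk good for 60fps: divider 5 (30fps out) is sustainable,
# a 2-second burst at divider 1 piles up 180 frames, then draining at
# 30 frames/s of headroom takes 6 more seconds to catch back up.
burst = fifo_depth(10, 150, [5, 5, 5, 1, 1, 5, 5, 5, 5, 5], 60)
```

As long as the peak depth stays under the allocated buffer size, nothing is lost; the excess just gets written out late.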

Here's some video I took to try out these new modes. It's all captured on the Microsoft Surface 2, so no fancy hardware required and it's all still very portable.

This is a simple test of the circular buffer at 1080p150 with post-trigger. The coin in particular is a good example of when it's nice to just leave the buffer running and post-trigger after a lucky spin gets the coin to land in frame.

More coin spinning, but this time using the continuous saving mode. Frames go into RAM at 150fps, but normally they are only written to disk at 30fps. When something interesting happens (such as the coin actually being in frame...), a burst of 150fps writing to disk is triggered. On the Surface 2, the write thread is slower than the read thread, so it can only proceed for a little while until the FIFO gets too full. Switching back to 30fps saving allows the FIFO to catch up.

Finally, a quick test of lower resolution / higher frame rate operation. At 480p, the frame rate can get up to 360+ fps. Buffering is fine at this frame rate (the overall data rate is actually lower). It actually doesn't require an insane amount of light either - the iPhone display is the only source of light here. You can see its IR proximity sensor LED flashing, as well as individual frame transitions on the display, behind the water stream. The maximum frame rate goes all the way up to 1100+ fps at 120p, something I have yet to try out.

That's it for now. The program (which started out as the FlyCapture2SimpleGUI source that comes with the camera) has a nice VC# GUI:

I can't distribute the source since it's derived from the proprietary SDK, but now you know it's possible and relatively easy to get it to capture and save efficiently with a bit of good programming. It was a fun project since I've never intentionally written interacting multi-threaded programs, other than maybe separating GUI threads from other things. I guess I'm only ten years or so behind on my application programming skills now...

Monday, May 5, 2014

Grasshopper3 Mobile Setup

I've now got a complete mobile setup working for the Grasshopper3 camera that I started playing with last week, and I took it for a spin during the Freefly company barbecue. (It's Seattle, so barbecues are indoor events. And it's Freefly, so there are RC drift cars everywhere.)

Since the camera communicates to a Windows or Linux machine over USB 3.0, I went looking for small USB 3.0-capable devices to go with it. There are a few interesting options. My laptop, which I carry around 90% of the time anyway, is the default choice. It is technically portable, but it's not really something you could use like a handheld camcorder. 

The smallest and least expensive device I found, thanks to a tip in last post's comments, is the ODROID-XU. At first I was skeptical that this small embedded Linux board could work with the camera, but there is actually a Point Grey app note describing how to set it up. The fixed 2GB of RAM would be limiting for buffering at full frame rate. And there is no SATA, since the single USB3.0 interface is intended for fast hard drives. So it would be limited to recording short bursts or at low frame rates, I think. But for the price it may still be interesting to explore. I will have to become a Linux hacker some day.

The Intel NUC, with a 4"x4" footprint, is another interesting choice if I want to turn it into a boxed camera, with up to 16GB of RAM and a spot for an SSD. The camera's drivers are known to work well on Intel chipsets, so this would be a good place to start. It would need a battery to go with it, but even so the resulting final package would be pretty compact and powerful. The only thing that's missing is an external monitor via HDMI out.

My first idea, and the one I ended up going with, is the Microsoft Surface Pro 2:

The Grasshopper3 takes better pictures at 150fps than my phone does stills.
Other than a brief mention in a Point Grey app note, there wasn't any documentation that convinced me the Surface Pro 2 would work, but it has an Intel i5-4300 series processor, 8GB of RAM, and USB 3.0, so it seemed likely. And it did work, although at first not quite as well as my laptop (which is an i7-3740QM with 16GB of RAM). Using the FlyCapture2 Viewer, I could reliably record 120fps on the laptop, and sometimes, if I killed all the background processes and the wind was blowing in the right direction, 150fps. On the Surface, those two numbers were more like 90fps and 120fps. Understandable, if the limitation really is processing power.

I also could not install the Point Grey USB 3.0 driver on the Surface. I tried every trick I know for getting third-party drivers to install in Windows: disabling driver signing (even though they are signed drivers), modifying the .INF to trick Windows into accepting that it was in fact a USB 3.0 driver, turning off Secure Boot and UEFI mode, and forcing the issue by uninstalling the old driver. No matter what, Windows 8.1 would not let me change drivers. I read on that internet thing that Windows 8 has its own integrated USB 3.0 driver, even though it still says Intel in the driver name. Anyway, after a day of cursing at Windows 8 for refusing to let me do a simple thing, I gave up on that approach and started looking at software.

The FlyCapture2 Viewer is a convenient GUI for streaming and saving images, but it definitely has some weird quirks. It tries to display images on screen at full frame rate, which is silly at 150fps. Most monitors can't keep up with that, and it's using processing power to convert the image to a GDI bitmap and draw graphics. The program also doesn't allow pure RAM buffering. It always tries to convert and save images to disk, filling the RAM buffer only if it is unable to do so fast enough. At 150fps, this leads to an interesting memory and processor usage waveform:

Discontinuous-mode RAM converter.
During the up slope of the memory usage plot, the program is creating a FIFO buffer in RAM and simultaneously pulling images out, converting them to their final still or video format, and writing them to disk. During the down slope, recording has stopped and the program finishes converting and saving the buffer. You can also see from the processor usage that even just streaming and displaying images without recording (when the RAM slope is zero) takes up a lot of processor time.

The difference between the up and down slopes is the reason why there needs to be a buffer. Hard disk speed can't keep up with the raw image data. An SSD like the one on the Surface Pro 2 has more of a chance, but it still can't record 1:1 at 150fps. It can, however, operate continuously at 30fps and possibly faster with some tweaking.

But to achieve actual maximum frame rate (USB 3.0 bus or sensor limited), I wanted to be able to 1) drop display rate down to 30fps and 2) only buffer into RAM, without trying to convert and save images at the same time. This is how high-speed cameras I've used in the past have worked. It means you get a limited record time based on available memory, but it's much easier on the processor. Converting and saving is deferred until after recording has finished. You could also record into a circular RAM buffer and use a post trigger after something exciting happens. Unfortunately, as far as I could tell, the stock FlyCapture2 Viewer program doesn't have these options.

The FlyCapture2 SDK, though, is extensive and has tons of example code. I dug around for a while and found the SimpleGUI example project was the easiest to work with. It's a Visual C# .NET project, a language I haven't used before but since I know C and VB.NET, it was easy enough to pick up. The project has only an image viewer and access to the standard camera control dialog, no capacity to buffer or save. So that part I have been adding myself. It's a work-in-progress still, so I won't post any source yet, but you can see the interface on this contraption:

Part of the motivation for choosing the Surface was so I could make the most absurd Mōvi monitor ever.
To the SimpleGUI I have just added a field for frame buffer size, a Record button, and a Save Buffer button. In the background, I have created an array of images that is dynamically allocated space in RAM as it gets filled up with raw images from the camera. I also modified the display code to only run at a fraction of the camera frame rate. (The code is well-written and places display and image capture on different threads, but I still think lowering the display rate helps.)

Once the buffered record is finished, Save Buffer starts the processor-intensive work of converting the image to its final format (including doing color processing and compression). It writes the images to a folder and clears out the RAM buffer as it goes. With the Surface's SSD, the write process is relatively quick. Not quite 150fps quick, but not far off either. So you record for 10-20 seconds, then save for a bit longer than that. Of course, you can still record continuously at lower frame rates using the normal FlyCapture2 Viewer. But this allows even the Surface to hit maximum frame rate.
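For reference, the simplest possible de-Bayering looks something like this half-resolution Python sketch, where each 2x2 RGGB cell becomes one RGB pixel (the SDK's actual color processing does proper interpolation and is much better than this; the function name and layout are illustrative):

```python
def bayer_rggb_to_rgb_halfres(raw, width, height):
    """Naive half-resolution de-Bayer of an RGGB mosaic: each 2x2 cell
    (R, G on the top row; G, B on the bottom) becomes one RGB pixel,
    averaging the two green samples. raw is a flat list of 8-bit values."""
    out = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r = raw[y * width + x]
            g = (raw[y * width + x + 1] + raw[(y + 1) * width + x]) // 2
            b = raw[(y + 1) * width + x + 1]
            row.append((r, g, b))
        out.append(row)
    return out
```

Even this trivial version shows why the save step is processor-intensive: it touches every pixel of every frame, and the real interpolating version does far more work per pixel, plus compression.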

All hail USB 3.0.
I just have to worry about cracking the screen now.
There are still a number of things I want to add to the program. I tested the circular buffer with post-trigger idea but couldn't get it working quite the way I wanted yet. I think that is achievable, though, and would make capturing unpredictable events much easier. I also want to attempt to write my own simultaneous buffering and converting/saving code to see if it can be any faster than the stock Viewer. I doubt it will but it's worth a try. Maybe saving raw images without trying to convert formats or do color processing is possible at faster rates. And there are some user interface functions to improve on. But in general I'm happy with the performance of the modified SimpleGUI program.

And I'm happy with the Grasshopper3 + Surface Pro 2 combo in general. They work quite nicely together, since the Surface combines the functions of monitor and recorder into one relatively compact device. The real enabler here is USB 3.0, though. It's hard to even imagine the transfer speeds at work. At 350MB/s, at any given point in time there are 10 bits, more than an entire pixel, contained in the two feet of USB 3.0 cable going from the camera to the Surface.

The sheer amount of data being generated is mind-boggling. For maximum frame rate, the RAM buffer must save raw images, which are 8 bits per pixel at 1920x1200 resolution. Each pixel has a color defined by the Bayer mask. (Higher bit depths and more advanced image processing modes are available at lower frame rates.) On the Surface, this means about 18 seconds of 150fps buffering, at most.
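The arithmetic behind that 18-second figure (the "roughly 6GB free for the buffer" is my guess at usable headroom on an 8GB machine, not a measured number):

```python
# Raw 8-bit Bayer frames at full resolution and frame rate:
frame_bytes = 1920 * 1200 * 1   # 1 byte per pixel -> ~2.3 MB per frame
rate = frame_bytes * 150        # ~345.6 MB/s streaming into RAM at 150fps

# Assume roughly 6 GB of the Surface's 8 GB is actually free for the buffer:
usable_ram = 6 * 1000**3
seconds = usable_ram / rate     # ~17.4 s, consistent with "about 18 seconds"
```
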

There are a variety of options available for color processing the raw image, and after color processing it can be saved as a standard 24-bit bitmap, meaning 8 bits of red, green, and blue for each pixel. In this format, each frame is a 6.6MB file. This fills up the 256GB SSD after just four minutes of video... So a better option might be to save the frames as high-quality JPEGs, which seems to offer about a 10:1 compression. Still, getting frame data off the Surface and onto my laptop for editing seemed like it would be a challenge.
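Checking the storage math:

```python
# A 24-bit BMP is 3 bytes per pixel. That's 6,912,000 bytes per frame,
# which is the "6.6MB" figure when 1 MB is counted as 1,048,576 bytes.
bmp_bytes = 1920 * 1200 * 3

frames_on_ssd = 256 * 1000**3 // bmp_bytes  # ~37,000 frames on the 256GB SSD
minutes = frames_on_ssd / 150 / 60          # ~4.1 minutes of 150fps video
jpeg_minutes = minutes * 10                 # ~41 minutes at ~10:1 JPEG compression
```
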

Enter the RAGE.
USB 3.0 comes to the rescue here as well, though. There exist many extremely fast USB 3.0 thumb drives now. This 64GB one has a write speed of 50MB/s and a read speed nearing 200MB/s (almost as fast as the camera streams data). And it's not even nearly the fastest one available. The read speed is so fast that it's actually way better for me to just edit off of this drive than transfer anything to my laptop's hard drive.

Solid- I mean, Lightworks.
Lightworks seems to handle .bmp or .jpg frame folders nicely, importing continuously-numbered image sequences without a hassle. If they're on a fast enough disk, previewing them is no problem either. So I can just leave the source folders on the thumb drive - a really nice workflow, actually. When the editing is all done, I can archive the entire drive and/or just wipe it and start again.

While I was grabbing the USB 3.0 thumb drive, I also found this awesome thing:

It's a ReTrak USB 3.0 cable, which is absolutely awesome for Mōvi use - even better than the relatively flexible $5 eBay cable I bought to replace the suspension bridge cable that Point Grey sells. It's extremely thin and flexible, so as to impart minimum force onto the gimbal's stabilized payload. I'm not sure the shielding is sufficient for some of the things I plan to do with it, though, so I'll keep the other cables around just in case...

That's it for now. Here's some water and watermelon:

Monday, April 21, 2014

New Camera...thing.

After being completely surrounded by new cameras for a weekend at NAB 2014, I decided to do a little shopping around to possibly update my video camera. My Panasonic HDC-SD60 has served me well, especially with its 20x optical zoom and image stabilization. Pretty much every video on my site for the last few years has been from that camera. (This one is maybe my favorite, and shows its pretty decent low-light performance as well.) But there is so much new video camera technology now that I couldn't resist the urge to do some research into video cameras in the $1-2k price range.

Last year the Blackmagic Pocket Cinema Camera (BMPCC) was announced at NAB, and at the time I thought I might be interested in getting one. It's a really awesome camera because of its size and ability to shoot raw, wide-dynamic-range HD video at a reasonable price. My only worry was that it was small enough that I might try to fly it on a smaller-than-adequate multirotor and crash. There's also the Panasonic GH3 (and the new 4K GH4 coming soon), which is well-known for its extremely good video quality. It has the same interchangeable lens format (MFT) as the BMPCC.

But I also really like the camcorder format - specifically, a built-in zoom lens and optical stabilization. The BMPCC and GH3/GH4 (and other dSLRs that are out of my price range) have the advantage of large-format sensors that can collect lots of light, something that most camcorders with integrated zoom lenses suck at. But I did find an exception: the Sony HDR-CX900/B (2K) and FDR-AX100/B (4K), both with awesome 1" sensors. Sample footage from the FDR-AX100/B is especially impressive.

So with those as my top choices I considered the pros and cons decided on...

...none of the above.
And here's why: Everything I ever take video of is moving, and moving quickly. Not only that, but the camera usually is moving quickly to keep up. And every single one of those cameras has a rolling shutter, something I still can't understand how the world has come to accept (kind of like Hulu ads that are longer than TV commercial breaks). Looking at the end of this FDR-AX100 test video, it goes from "wow, that is the sharpest-looking cat video I have ever seen on the internet" to "okay, this is actually broken," in my view. So my solution to the problem was to run away from the consumer market entirely and get a machine vision camera with a global shutter. Specifically, a Point Grey Grasshopper3 (GS3-U3-23S6C-C):

It's actually a tiny thing!
The body is smaller than a GoPro, but that's a pretty meaningless metric as I will explain shortly. As a machine vision camera, there is no shortage of inexpensive CCTV and scientific lenses for it. And this particular one has a color Sony IMX174 1/1.2" CMOS sensor, which is supposed to be quite good. Most machine vision cameras with global shutter use a CCD, but this new Sony global-shutter CMOS is interesting and hopefully will appear in a Sony camcorder soon, at which point maybe I rejoin the normal world. (Probably not; I am a terrible consumer because I usually think I could build things better from scratch...) But for now, the only way to get this sensor is in an industrial block camera like this.

No, that's not a DB9...it's a USB 3.0 mini-B.
Bill Kerman makes an excellent imaging test subject...
Anyway, the downside of a machine vision camera is that it's missing some (most?) of the parts that normally make up a camera. You've got a sensor, an FPGA for image processing, and a USB 3.0 port. The rest is left as an exercise to the user. In effect, you are tied to a computer for recording the video. It's worth mentioning that the idea of an external recorder is not uncommon in high-end video cameras, so this isn't that unusual. But I did sort of go in the opposite direction from the highly-integrated camcorder that I wanted. For now, anyway.

The benefits make up for it, I think. For starters, it can shoot up to 162 frames per second at 1920 x 1200 resolution. This is in 8-bit raw mode, so the data rate is 1920 x 1200 x 162 Bytes/s = 373MB/s. (For speed calculation, I'm treating 1MB as 1,000,000 Bytes, not 1,048,576 Bytes.) Color processing of the raw Bayer filter sensor data can be done on either side of the USB3.0 transfer, but for maximum frame rate it is left to the host computer. As for what happens to the data: if it can be written to hard drive space fast enough, it is. If not, as is the case on my laptop, it can be buffered in RAM.

Hrm, time to get an SSD I guess...
The RAM buffer is pretty common in high-speed cameras, but it means your record time is limited. The rate at which raw video chews through RAM is impressive. A fast SSD might just be able to keep up with the USB3.0 data rate, if there are no other bottlenecks in the system. For now, though, I just stuck to short bursts at the not-quite-maximum frame rate of 120fps:

I was playing with different video modes: mostly raw grayscale images and color-processed H.264 compression. (The H.264 encoding seems to be faster than the HDD write speed at the moment.) But yeah, it's certainly capable of high-quality HD video at 4-5x slow motion. At lower resolutions, it can go to even higher frame rates, mostly limited by the USB3.0 data rate.

Did you notice the global shutter? Freeze any of the frames with a propeller in it and the prop is visible in its normal shape...not something ranging from a banana to a boomerang depending on the shutter speed. The shutter can also be set as low as 5μs, meaning you can stop just about anything short of a bullet with enough light. (Ping-pong balls are relatively easy to stop even at ~2ms shutter speed and normal warehouse lighting, as was demonstrated.) 
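The motion blur math is just speed times shutter time. Assuming a ping-pong ball moving at around 10m/s (my guess for a casual hit, not a measurement):

```python
def blur_mm(speed_m_per_s, shutter_s):
    """Distance the subject moves during the exposure, in millimeters."""
    return speed_m_per_s * shutter_s * 1000

# At a ~2 ms shutter, a 10 m/s ball smears ~20 mm: visible, but the ball
# is still recognizably a ball, matching the warehouse demo. At the 5 us
# minimum shutter, the same ball smears only 0.05 mm: frozen solid.
slow_shutter_blur = blur_mm(10, 0.002)  # ~20 mm
fast_shutter_blur = blur_mm(10, 5e-6)   # ~0.05 mm
```
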

The shutter can be synchronized with an external digital signal. So I can do this, but without a strobe gun. (Side note: You can see the rolling shutter wreaking havoc on the strobe in that video, and this one too, creating bands of light and dark as part of the image is shuttered with the strobe on and part of it with the strobe off.) The shutter sync works both ways too; it can also output a digital signal that is synchronized to the shutter. I have plans for this feature as well...

One other interesting characteristic of this sensor is that it has quite good low-light performance. This is useful for high-speed video since it can make the most out of the photons it gets with a very fast shutter. But it's also interesting to play with on its own. For example, I can image Bill Kerman in almost pitch black:

It's not nearly as impressive as the Sony A7s low-light demo, but it does produce quite nice video even at night, using the on-board color processor to do some gamma correction. Here's some video taken with just the slightest hint of light left in the sky:

I have no idea what kind of ISO it can achieve, and it's not a published specification for this camera. (If I had to take a ballpark guess, I would say ISO 2000+ with acceptable noise levels? I have almost no feel for that metric, though, so I'll have to try metering it against something.) But it's good compared to any video camera I've owned. Probably not quite as good as a full-frame dSLR. But combined with the global shutter, I think it can do some very interesting night shooting.

So far so good...I have many planned uses for this thing. The next step, though, will be un-tethering it...

Sunday, April 13, 2014

NAB Show 2014

This year was my second trip to the National Association of Broadcasters exhibition (NAB Show) in Vegas as part of the Freefly Systems setup and pit crew. NAB is a huge (~93,000 attendees) expo for media technology, kind of like CES but for the Producers rather than the Consumers. In fact it's in the same venue as CES, the Las Vegas Convention Center.

Last year, the MōVI gimbal debuted and I think I explained the concept of active stabilization (one I can't seem to escape no matter where I end up) to a thousand different people in four days. This year, I didn't even have time to count the number of other handheld active stabilizers there were at the show. Certainly the gimbal trend has taken hold and there is no going back now.

The Birdy Cam!
I think the MōVI brand still holds the top end of the active stabilizer market (highest performance and, yes, highest price). I haven't used and won't ever use this blog for advertising - I'm not good at it anyway - but one big advantage Freefly had was a one-year head start during which some really spectacular footage was created by ever-more-skillful operators, and it's fun to see the results.

One of the event highlights was this interview with Tabb (Freefly president) and Garret Brown, inventor of the Steadicam. I felt like there should be some lightning bolts for dramatic effect given the "Steadicam-killer" hype surrounding handheld gimbals. But in actuality, the point made is an important technical one: active stabilizers control {pitch, roll, yaw}. There are still three degrees of freedom {x, y, z} that are at the mercy of the operator's movement, and Steadicam and its operators have perfected the smoothing of translation over the last 40 years.

Part of the fun of NAB is that I finally get to show off what I've been working on. My big project for this NAB was the EE/software for the new MōVI Controller, possibly the most hardcore-looking RC transmitter in existence:

My pet project, the blue OLED display. (The one on the unit itself, not the SmallHD monitor...) Anti-aliased font support, bitmaps, bar graphs, scrolling, string and numeric formatting, etc., all in a lightweight display driver written from scratch.
The station I somehow ended up manning at NAB: the new controller paired with the largest MōVI, the M15, and the Sony F55, rigged with wireless video and remote focus.
People trying out the new controller. Controlling framing and focus at the same time would take some practice, I think, and there is still the option of having a third operator control focus.
This was a hands-on demo; anyone could walk up and try it out. On one hand this was great, because it means I don't have to hold it the entire time (it was about 18-19lbs, total). But on the other hand, as my Maker Faire experience has informed me, it also means constantly watching the equipment, making sure it stays operational, changing batteries, and reminding people to share...all while trying to answer questions. And of course while most people are very respectful of the hardware, there are the expo trolls who go from booth to booth trying to break things.

In general, booth ops went much more smoothly this year, I think, due to a combination of better preparation and more manpower. There were a few other new toys to keep people engaged as well:

A Zero FX electric motorcycle with a Steadicam arm and M15 gimbal attached to the back. It seems I can't escape electric motorcycles no matter where I end up, either.
The Tero, a 1/5th-scale camera car, made a return as well. It's carrying an inverted M10 gimbal and Blackmagic 4K Production Camera.
Because our hardware was working well, and because we had enough people in the pits to handle the traffic, I actually got to wander around the show floor this year. There was a lot of camera porn, for sure. The Sony a7S was one of the big announcements, a small camera with supposedly epic low-light performance thanks to a full-frame 35mm sensor with just enough huge, gapless pixels for 4K video. On the other end of the size spectrum, AJA and Blackmagic also announced new, relatively inexpensive, 4K professional cameras.

The Blackmagic URSA. I can't get over how nice the machining is. On the other side is a 10" 1080p monitor.
I also went in search of the large active stabilizers - the ones that are mounted to full-size helicopters and camera cars for just about every aerial or car chase scene in a movie ever. There were three that I found at the show this year:

Filmotechnic, camera car specialists, with the Russian Arm and Flight Head active stabilizer (not sure exactly which one).
Shotover K1 full-size helicopter gimbal. This was about twice as large as I thought it was.
Cineflex, about the size I thought it was, but attached to a 27'-wingspan RC plane! Ryan Archer gogogogogo.
Cineflex ATV with one mounted in front and another in back.
A few other random sights of the show:

Crab drive (Or is it Swerve Drive? I can never remember the distinction.) FIRST robot camera dolly.
The circular equivalent of energy chain.
EditShare Lightworks, a powerful and relatively inexpensive video editing tool that I discovered at last year's NAB and have been using since.
Cutaway of a Canon lens...not sure how this even exists.
So yeah, these were just some of the things I saw at the show this year. It's definitely a bit of a circus, with a lot of money spent on impressive booths (and yes, sadly, in this day and age, booth babes are still a thing).

Booth cars I can understand.
Ignoring the flashy bullshit is hard, but underneath there is some cool tech on display and that's mostly what I like to see. Active stabilizers, wireless HD video, less and less expensive high-quality video cameras, more accessible software, etc., all make for an exciting media era - one for which I will happily hide on the engineering side.

Saturday, March 15, 2014

KSP: Mission to Laythe (and back).

Mission Summary:
Total M.E.T.: 984 days
Ships: 4 (2 to Laythe, 1 landed)
Crew: 12 (6 to Laythe, 3 landed)
Landed Mass: 56 tons
Furthest Distance from Kerbol: 82,000,000 km
Orbital Rendezvous: 4 (two in low Kerbin orbit, two in low Laythe orbit)
Quicksave Reloads: 4 (two game bugs, two landing retries)

After a nearly 1,000-day mission, I've successfully returned six Kerbals, including three that have landed on Laythe, to Kerbin. Laythe, the water moon of Jool (KSP's Jupiter analog), is probably the most interesting and challenging target for exploration in the Kerbol system. For that reason, I modified my Interplanetary Transport Ship (and proven Duna lander) to give it more range and more lifting power - just barely enough to pull off the trip...

...they hope.
A single ship would not be able to carry enough fuel to land on Laythe and return. In fact, I determined that even getting from the surface of Laythe back into orbit would require almost all the fuel the lander can carry. In order to have fuel for the journey itself, two ships would have to go. And each of those ships would need to be completely refueled in Kerbin orbit (by two more ships) before heading off. So, the mission as a whole required four ships (twelve crew) and four orbital rendezvous. Basically, lots of docking practice.

One of two Kerbin Orbit Rendezvous.
Docking and refueling in Kerbin orbit is somewhat routine, so I'll skip that part and focus on the bulk of the mission, which really has four parts: 1) Kerbin to Laythe, 2) Laythe Landing, 3) Laythe Ascent, and 4) Laythe to Kerbin.

1) Kerbin to Laythe

I've only managed to get to Laythe once before. Other attempts have been cut short by ship-eating game glitches such as the Deep Space Kraken. This time, I made sure to use quicksaves and backup persistent saves so I wouldn't be screwed if my ships randomly blew up in deep space.

Shit happens.
Game bugs aside, getting to Jool is not particularly challenging. According to the map, it takes about 1915m/s of Delta-V to get from low Kerbin orbit to Jool intercept. This is assuming an optimum Hohmann transfer orbit, which can be planned using this handy tool. The transfer orbit looks something like this:

Both of my fully-fueled mission ships managed the transfer with a Delta-V of about 1975m/s, almost perfect. (Part of the burn was done using the refueling ship to boost its docked fuelee into a more energetic Kerbin orbit. The fully-fueled ship would then detach and finish the transfer burn on its own in subsequent orbits.) Later in the transfer, some smaller course correction burns add a bit more to the total Delta-V required to get to Jool. But ideally, once a Jool encounter is achieved, very little additional fuel should be necessary to slow down, get captured by Jool, and ultimately, transfer into a Laythe orbit. Since both Jool and Laythe have atmosphere, this can mostly be done with well-targeted aerobraking.
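As a sanity check on the ~1915m/s figure, here's a rough patched-conic estimate of the ejection burn in Python. The body constants are taken from the stock KSP wiki and the 80km parking-orbit altitude is my assumption; treat this as a back-of-the-envelope sketch, not an exact flight plan.

```python
from math import sqrt

# Rough patched-conic check of the ~1915m/s Kerbin->Jool transfer figure.
# Body constants from the stock KSP wiki; treat exact values as assumptions.
MU_KERBOL = 1.1723328e18          # Kerbol gravitational parameter, m^3/s^2
MU_KERBIN = 3.5316e12             # Kerbin gravitational parameter, m^3/s^2
R_KERBIN_ORBIT = 1.3599840256e10  # Kerbin's orbital radius around Kerbol, m
R_JOOL_ORBIT = 6.877356032e10     # Jool's orbital radius around Kerbol, m
R_PARK = 600e3 + 80e3             # 80km parking orbit (Kerbin radius + altitude), m

# Heliocentric Hohmann transfer: speed at Kerbin's radius on the transfer
# ellipse, minus Kerbin's own orbital speed, gives the required v-infinity.
a_transfer = (R_KERBIN_ORBIT + R_JOOL_ORBIT) / 2
v_transfer = sqrt(MU_KERBOL * (2 / R_KERBIN_ORBIT - 1 / a_transfer))
v_kerbin = sqrt(MU_KERBOL / R_KERBIN_ORBIT)
v_inf = v_transfer - v_kerbin

# Ejection burn from the parking orbit (Oberth effect included):
v_park = sqrt(MU_KERBIN / R_PARK)
dv_eject = sqrt(v_inf**2 + 2 * MU_KERBIN / R_PARK) - v_park
print(f"v_inf = {v_inf:.0f} m/s, ejection burn = {dv_eject:.0f} m/s")
```

This comes out around 1930m/s from an 80km parking orbit, within about 1% of the map's figure (the map likely assumes a slightly different parking altitude).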
Lucky Tylo slingshot. Remember kids: always aerobrake in the counter-clockwise direction.
I tried a couple different techniques for aerocapture and aerobraking: using Jool's atmosphere to slow down into a less energetic Jool orbit and using Laythe's atmosphere for a direct aerocapture into Laythe orbit. Which is better depends a lot on the direction of Laythe in its orbit relative to yours: if nearly parallel, less energy is required to aerocapture and it could be a good opportunity to do so. If you are approaching Laythe at a right angle, it might be better to wait for a better opportunity and aerobrake more at Jool instead. In either case, it's important to always approach Jool in the correct orbital direction (counter-clockwise, viewed from above) so that you're not setting yourself up for a head-on Laythe collision.

Ship #1, aerobraking away from Jool "set". 
Ship #2, aerobraking into Jool "rise".
The two ships took slightly different amounts of course correction and tweaking to get to Laythe. (In a few cases, an aerocapture was not completely successful and some fuel had to be burned to slow down to less than escape velocity.) But most of the fuel for this leg of the journey is just spent setting up the transfer. Here's a summary of the outbound trip for both ships:

Once the ships had both arrived in low Laythe orbit, it was time for the third rendezvous of the mission (the first between these two ships) in order to refuel the one that would become the lander. The second ship stays in orbit around Laythe and retains just enough fuel for the return trip to Kerbin.

First Laythe Orbit Rendezvous. 
2) Laythe Landing

I already outlined my landing strategy in this post, which includes my precision deorbit simulation tool. So here I can just review how it actually went.

Not well.
Actually, the deorbit tool worked perfectly - it was the ship design that was fundamentally flawed. With full fuel, the lander is severely top-heavy and can tolerate at most a 10-15º incline. Lacking sophisticated tools such as ground slope radar, and without any spare fuel for last-minute adjustments, the Kerbals really have no choice about where exactly they land, and even the flattest continents on Laythe are covered with fairly steep sand dunes.

But it's a damn water world, and I managed to hit land three times in a row. In all three cases, the Kerbals survived the landing, but in the first two attempts, the ship fell over and was destroyed. Rather than assigning the Kerbals colonial status, I used up two quicksave retries and finally stuck the landing on the third try.

...just barely.
Other than the two tip-overs, the landing went almost exactly as planned. The deorbit tool provided an almost pin-point landing site prediction, and the parachutes (drogues first, then main chutes) worked fine, slowing the lander to about 21m/s terminal velocity. The last quick burst of power to slow the ship to a safe touchdown speed was done from the in-cockpit view, so that I could watch the radar altitude and time the start of the burn precisely. As a result, it used only 1.9 tons of fuel, about as good as my best practice runs on Kerbin. (I allotted 2-3 tons for deorbit and landing.) The remaining lander, still almost fully fueled, weighed in at about 56 tons. It truly is a behemoth of a landing craft, but that's what it takes to get three Kerbals back into Laythe orbit.

3) Laythe Ascent

The ascent from Laythe was the most uncertain portion of the trip. Laythe is 4/5ths the size of Kerbin and has a surface gravity of 0.8g. Consequently, launching from its surface into orbit is a bit easier than from Kerbin, but not by much. And it's impossible to practice this stage of the flight, other than by launching from Kerbin. (This vehicle cannot reach Kerbin orbit, or even come close, so that test isn't very conclusive.)

The original ship (ITS1), which successfully landed on and returned from Duna, had one central "Poodle" engine and four super-efficient but low-thrust LV-N's. The biggest modification to ITS2 was replacing two of the LV-N's with LVT-30's, increasing the lift-off thrust-to-mass ratio of the lander to about 1.2g (1.5x Laythe gravity). Without this modification, it would have had no hope of getting off the surface. The LVT-30's burn through their fuel quickly, though, and get staged after about 90 seconds, according to my ascent simulator:

Laythe ascent profile. (Stage definitions here.)
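The lift-off thrust figures can be checked with a couple of lines of arithmetic. This sketch assumes the ~56 ton lift-off mass from the mission summary and the quoted 1.2g thrust-to-mass ratio:

```python
# Quick check of the lift-off thrust-to-mass figures quoted above,
# using the ~56 ton landed mass from the mission summary (an assumption
# for the exact lift-off mass).
g0 = 9.81            # Kerbin surface gravity, m/s^2
g_laythe = 0.8 * g0  # Laythe surface gravity (0.8g), m/s^2
mass = 56_000        # lander mass at lift-off, kg

twr_kerbin = 1.2                  # quoted thrust-to-mass ratio, in Kerbin g
thrust = twr_kerbin * mass * g0   # total thrust required, N
twr_laythe = thrust / (mass * g_laythe)

print(f"required thrust ~ {thrust / 1000:.0f} kN")   # ~659 kN
print(f"TWR in Laythe gravity = {twr_laythe:.2f}")   # 1.2 / 0.8 = 1.50
```

The 1.5x-Laythe-gravity figure falls straight out of the ratio of the two surface gravities, independent of the lander's mass.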
After the LVT-30's and their fuel tanks are gone, the ship's thrust-to-mass ratio dips sharply (still greater than 1x Laythe gravity, but barely). The two LV-N's and single Poodle struggle along to get it faster and higher. Finally, the Poodle runs out of fuel (it doesn't get jettisoned, though) and the LV-N's finish off the ascent. By this time, the thrust-to-weight is well below one, but that's okay because the rocket is moving fast enough sideways that the ground is falling away from it. (Or at least, that's how I think about it.)

Laythe ascent speed and altitude.
By the end of the ascent, 400 seconds after lift-off, the ship should be in a stable circular orbit, hopefully at or above 75km, with about two tons of fuel to spare. Or so the theory suggests. Time to try it out!

Returning to the ship after one last look at the terrain.
Nighttime launch. Stage 1: All five engines at nearly max power.
Stage 2: LVT-30's and their fuel tanks are jettisoned, LV-N's and Poodle still firing.
Stage 3: Poodle engine shutdown, LV-N's efficiently finishing off the circular orbit, into a beautiful sunrise.
And, back in orbit!
For once, something worked exactly as planned on the first try. Well, not exactly: the actual final orbit was at about 120km instead of 75km. But I can't complain about extra energy. The remaining fuel was also right on target, at two tons or slightly more. Rather than burning some of it to get back down to 75km, where the orbiting sister ship was parked, it made more sense to bring the sister ship up to 120km. (If you're planning to leave orbit anyway, it always makes sense to burn prograde and increase the overall orbital energy of the lower ship for docking, rather than burning retrograde and decreasing the energy of the higher ship.) The fourth and final orbital rendezvous went without a hitch:

The two ships meet again. Val also making an appearance.
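The prograde-vs-retrograde argument can be sanity-checked with a quick Hohmann calculation between the two parking orbits. Laythe's gravitational parameter and 500km radius here are taken from the stock KSP wiki; treat the exact values as assumptions:

```python
from math import sqrt

# Compare the cost of raising the lower ship (75km -> 120km) against
# lowering the higher ship (120km -> 75km) around Laythe.
# Laythe constants from the stock KSP wiki (assumptions).
MU_LAYTHE = 1.962e12  # Laythe gravitational parameter, m^3/s^2
R_LAYTHE = 500e3      # Laythe radius, m

def hohmann_dv(mu, r1, r2):
    """Total delta-V for a Hohmann transfer between circular orbits r1 -> r2."""
    a = (r1 + r2) / 2
    dv1 = abs(sqrt(mu * (2 / r1 - 1 / a)) - sqrt(mu / r1))  # burn at r1
    dv2 = abs(sqrt(mu / r2) - sqrt(mu * (2 / r2 - 1 / a)))  # burn at r2
    return dv1 + dv2

r_low = R_LAYTHE + 75e3
r_high = R_LAYTHE + 120e3
up = hohmann_dv(MU_LAYTHE, r_low, r_high)
down = hohmann_dv(MU_LAYTHE, r_high, r_low)
print(f"raise 75->120km: {up:.0f} m/s, lower 120->75km: {down:.0f} m/s")
```

Both directions cost the same ~70m/s, so the fuel bill is identical either way - the real win from burning prograde is that the combined ship ends up with more orbital energy going into the departure burn.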
4) Kerbin Return

Here I deviate from my original flight plan a little, because I thought of an even more fuel-efficient way to get back. The original plan was to transfer about 50% of the remaining fuel to each ship, jettison the empty tanks and LVT-30's on the ship that still had them, and have both ships return independently on LV-N's. But this method would still involve carrying a lot of dead weight in the form of mostly-empty fuel tanks and Poodle engines that probably wouldn't get used again. So I came up with a better return configuration:

Farewell, empty ship!
I transferred all of the remaining fuel into one ship, the lander, which had already jettisoned its LVT-30's during its ascent from Laythe. Then, I cut loose the entire second ship, the orbiter, except for the command pod with its three Kerbals. This newly-lightened single return ship would have plenty of Delta-V, much more than the two ships would have had on their own. As an added bonus, I only had to keep track of one ship during the trip back. The downside is that I left some space junk in Laythe orbit.

Never mind that, though, time to return home. I've done a Laythe direct return before, where I burn out of Laythe orbit directly onto a Jool-Kerbin transfer orbit. It's by far the most efficient way out. I did the math at some point in the past:

You can calculate the launch window and required angle of departure using this calculator. Just put in Laythe's orbital radius as the "parking orbit" for Jool. The time to leave Laythe is when it reaches the position in its orbit around Jool that the departure angle represents. Sort of a hacky way to calculate it, but it works. Then all you have to do is exit Laythe's sphere of influence parallel to its orbit, with enough Delta-V to get onto a Jool-Kerbin transfer orbit. It should look like this:

Exiting Laythe SOI parallel to its orbit.
And with the correct Delta-V to get onto a Jool-Kerbin transfer orbit.
And it should take somewhere between 1,000m/s and 1,200m/s of Delta-V, which is remarkably efficient compared to the trip from Kerbin to Jool. It's far more efficient than diving towards Jool and then doing an escape burn. (Although it wasn't intuitively obvious to me that that was the case - I had to do the math to prove it. Sometimes diving to a lower Pe and then burning from there can be more efficient, I think.)
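Here's a rough patched-conic version of that math, chaining the Jool-Kerbin Hohmann transfer back through Jool's and Laythe's gravity wells. Constants are from the stock KSP wiki and the 75km parking orbit is my assumption:

```python
from math import sqrt

# Patched-conic estimate of the Laythe departure burn onto a
# Jool->Kerbin transfer orbit. Constants from the stock KSP wiki
# (treat exact values as assumptions).
MU_KERBOL = 1.1723328e18
MU_JOOL = 2.82528e14
MU_LAYTHE = 1.962e12
R_KERBIN_ORBIT = 1.3599840256e10  # around Kerbol, m
R_JOOL_ORBIT = 6.877356032e10     # around Kerbol, m
R_LAYTHE_ORBIT = 2.7184e7         # Laythe's orbital radius around Jool, m
R_PARK = 500e3 + 75e3             # 75km Laythe parking orbit, m

# 1) v-infinity relative to Jool for a Jool->Kerbin Hohmann transfer:
a_t = (R_KERBIN_ORBIT + R_JOOL_ORBIT) / 2
v_apo = sqrt(MU_KERBOL * (2 / R_JOOL_ORBIT - 1 / a_t))
v_jool = sqrt(MU_KERBOL / R_JOOL_ORBIT)
v_inf_jool = v_jool - v_apo

# 2) Jool-relative speed needed at Laythe's radius on the escape
#    hyperbola; exiting parallel to Laythe's orbital velocity means the
#    difference from Laythe's speed is the v-infinity relative to Laythe:
v_laythe = sqrt(MU_JOOL / R_LAYTHE_ORBIT)
v_at_laythe = sqrt(v_inf_jool**2 + 2 * MU_JOOL / R_LAYTHE_ORBIT)
v_inf_laythe = v_at_laythe - v_laythe

# 3) Ejection burn from the 75km Laythe parking orbit:
v_park = sqrt(MU_LAYTHE / R_PARK)
dv = sqrt(v_inf_laythe**2 + 2 * MU_LAYTHE / R_PARK) - v_park
print(f"estimated Laythe departure burn: {dv:.0f} m/s")
```

This simple estimate comes out around 1250m/s, at the top of the quoted range - patched conics ignore the finite sphere of influence, so it likely errs a bit high.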

Anyway, I royally screwed up the departure by trying to break up the burn into two parts.

Burn #1, to a Laythe Ap of about 2,000km.
Very nice scenic route out, but I forgot how quickly Laythe orbits Jool.
Burn #2: now Laythe is way off the intended departure angle.
I didn't want to do one long (~8 minute) burn, so I split it into two, with one final elliptical orbit of Laythe in between. This could have worked well, had I actually planned it correctly. I routinely do this from Kerbin, since the required transfer burns are huge and the ships are usually fully-fueled and heavy when they leave. But I forgot that Laythe orbits Jool much faster than Kerbin orbits the sun, so in just a single large orbit of Laythe, it moved quite far in its orbit of Jool - enough to severely mess up my departure angle. As a result, I used much more fuel than needed and my transfer orbit came out like this:

Far from parallel to Laythe's orbit = very inefficient.
And I have first thrown myself into deeper space...
So it was a good thing that I had plenty more fuel than anticipated for this leg of the journey. Lesson learned for future trips as well: plan in the extra time for the first burn or just do the entire Laythe departure burn in one go. One unintended benefit of this mistake was that the Kerbals got to see Eeloo, the elusive (oh, I get it now!) Pluto-like outer planet of the Kerbol system:

It's there, I swear. (Click to zoom.)
Anyway, more than a year later, and with a few course corrections along the way to fix my messed-up transfer orbit and get on the right orbital inclination, the Kerbals get their first glimpse of home:

The entire return trip took around 2,000m/s of Delta-V, almost twice what it should take and what it took last time I returned from Laythe. If I had tried the return method using two independent ships, they might not have made it back. As it was, with the single return ship, there was still enough fuel to pull it off. Here's a summary of the return trip Delta-V budget, an extreme worst-case scenario given how badly I messed up the departure angle:

The ship needed to slow down a lot at Kerbin, and it didn't have much fuel to spare in case the aerocapture fell short, so I made sure to dive pretty deep into the atmosphere, targeting a post-aerocapture apoapsis of about 500km. Usually, my aerobraking spreadsheet tends to overestimate the Delta-V for hyperbolic orbits / aerocapture, so I was expecting to come out quite a bit higher. I had never tested aerobraking with docked ships, so there was also a non-zero chance of complete destruction. But actually, it worked quite well. It even makes sense physically, leading with the heat shield of the docked command pod. KSP doesn't model heat damage at the moment, so it doesn't matter much, but it does look cool:

And the aerocapture worked well, ending with an apoapsis of about 1,000km (less braking Delta-V than predicted, but more than sufficient for capture). Further light aerobraking passes lowered the orbit of the combined ship to about 100km.

The final step was to deorbit the command pods. I chose to do this one at a time. For one, I still wasn't very confident in the aerodynamic stability of the two-pod system. Doing one deorbit burn and then separating them was also out of the question: having two ships in the atmosphere at the same time usually leads to one disappearing forever. I had enough fuel left to deorbit one and then quickly turn around and put the remaining ship back into a parking orbit.

Step 1: Deorbit burn at the target crater.
Step 2: Undock.
Step 3: Turn around and quickly get back into orbit.
By now I have almost perfected Kerbin deorbit using my precision deorbit calculator and can pretty accurately target a landing just off the coast at the Kerbal Space Center.

For the second deorbit, I decided to ride it out inside:

And after almost 1,000 game days and probably around 200 million kilometers of distance traveled, the two ships both landed within a hundred km or so of where they started:

The second splashed down probably 3km from the KSC...
So ends a mostly-successful mission to the surface of Laythe and back, using a ship that wasn't really designed for the task. The technical parts of the mission - precision deorbits to hit what little land there is on Laythe, and a first-attempt ascent - went nearly flawlessly, thanks to the two simulators I made to assist. Transfers and aerobraking also went mostly well. The only real failure was the lander design itself - it's just too top-heavy to land on hilly terrain.

Overall, while it is immensely fun to plan and execute a mission with such small margins for error in terms of fuel, it's also sometimes time-consuming and tedious. Additionally, even if the lander could tolerate steeper inclines, this method of Laythe landing restricts landing sites to a location somewhere on the equator, within roughly a 5º margin of error from the deorbit burn. There's no opportunity for exploring the surface, or even for landing two or more ships in the same place. Every bit of fuel is needed for the ascent, so powered or rocket-guided descents are out of the question.

So a new, more flexible transport system between Kerbin and Laythe will be required for a sustained presence there. Larger transport ships and permanent refueling stations around Kerbin and Laythe are part of this. But most importantly, a new method of getting down to the surface and back up will be required. And towards that goal, I'll end with this teaser: