Kevin Ludlow is a 45-year-old accomplished software developer, business manager, writer, musician, photographer, world traveler, and serial entrepreneur from Austin, Texas. He is also a former candidate for the Texas House of Representatives.
Please take a moment to view his complete resume for more information.
Note: the entirety of this website was architected and developed from the ground up exclusively by Kevin Ludlow.
What an exciting Monday morning I had (12/12/2016). A U-Haul right in front of me on I-35 smashed into some cars and fled the scene. I called 911 and immediately began chasing the U-Haul.
Top left: the U-Haul just after the crash; top right: police on the scene; bottom left: police arresting the suspects; bottom right: selfie with the arresting officer
After weaving in and out of traffic, he exited at MLK and headed eastbound. He began running red lights, speeding excessively, and clearly trying to get away. Fortunately, I know Austin pretty well. I followed him south on Chestnut, east on 12th, northeast on Webberville, south on Tannehill, and finally west on Jackie Robinson. Since his side mirror had been destroyed in the crash, I purposely kept my truck to his left so he couldn't see me (it's hard to see much from a U-Haul anyway). By coincidence, he and his passenger pulled over on Jackie Robinson just west of Axel Lane, half a block from property I own. They ditched the U-Haul.
I kept a distance of about 100 yards, not knowing what they might do. They got out of the truck and started approaching me. I began to reverse my vehicle, all the while reporting the status to dispatch. I finally told the police to approach southbound on Tannehill so they would intercept them on Jackie Robinson. They did. Police showed up with guns drawn. The pair (seemingly a couple) were arrested on the spot. I provided a positive identification of the driver and gave a statement covering the entire pursuit.
A map detailing the route of the pursuit.
When it was all over, the arresting officer said he knew me from when I ran for office in 2014 (Kevin Ludlow for TX House 46). I asked how. It turns out he was the officer who had broken into my house in 2014 and tried to arrest me, citing the "smell of marijuana." I couldn't believe it and just laughed. We shook hands and took a selfie. It was nice to make peace with him. He appreciated what I had done and said they almost never catch hit-and-runs. Everyone in the accident was okay.
So this is Prashant, the guy whose car was smashed Monday morning. He found me on Facebook and invited me out to lunch. It turns out he's a software developer too and works right near me. He invited me to join him and his wife for an authentic home-cooked Indian meal at his house sometime, and said he would invite me to Holi in a few months. Such a positive feeling being able to help someone. I feel great that he's so happy about it all.
About the best ending we could hope for after meeting Prashant, the victim in the hit-and-run crash
When I first set out to design and build kevinludlow.com, I did so under the impression that it wouldn't be long before every single thing we had was digitally catalogued, and that it would become increasingly difficult to protect oneself from that cataloguing.
I'm not referring to police records and tax documents. I'm referring to that picture of you where you've had six too many to drink, your eyes are bloodshot, and there's still some puke running down your lip from when you last vomited. These are moments we try dearly to make disappear, but they're not going to disappear. The second an iPhone or Android snaps a picture like that, not only is the content saved in an archive forever (yes, even if you delete it), but the metadata attached to the photo is frightening. That metadata tells the full story of exactly where you were, what you were doing, how long you were there, and everything else in between.
Along with that have come hundreds of hours of video footage that I've recorded over my lifetime. FINALLY, I have finished the part of my site that lets me archive all of my video footage not only on my own personal website, but on YouTube as well.
As with most of its tools, Google has created a wonderful API for connecting with your YouTube channel and uploading videos to it. I've taken full advantage of this, and in the past month or so I have uploaded about 1,500 videos to the service as part of the initial experiment. Now that I finally have the tools to do it, I probably have another 200-300 hours of video that I will fully digitize and send to YouTube.
My process is simple. I start by placing all of my media content in the regular directory it belongs in on my server. Think of it like a giant iPhoto gallery: an enormous directory tree with about 90,000 unique memories in it (as of this writing). From there my servers run a series of automated processes that I've developed over the years. Photos are converted into the different sizes and resolutions my software package needs. Videos are converted into Ogg format (an open-source video format for the web) so that I can display them using HTML5 on this site. Thumbnails are automatically extracted from the videos and organized. Each entity created within my system is given a unique ID and catalogued in my database, and a lookup table cross-references any of those unique IDs with a specific file type (photo, video, etc.).
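The individual conversion steps are roughly what you'd get from stock tools like ImageMagick and ffmpeg run by hand; the filenames, sizes, and codec settings below are just illustrative examples, not my exact configuration:

# resize a photo to a gallery-sized copy with ImageMagick
convert IMG_1234.CR2 -resize 1024x1024 gallery/IMG_1234_1024.jpg
# convert a video to Ogg (Theora video / Vorbis audio) for HTML5 playback
ffmpeg -i clip.avi -c:v libtheora -q:v 7 -c:a libvorbis clip.ogv
# grab a single frame from the video to use as a thumbnail
ffmpeg -i clip.avi -ss 00:00:05 -vframes 1 thumbs/clip.jpg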
Focusing solely on the videos: at this point in the process there are videos on my server that have thumbnails and have been converted to a useful (and viewable) web format, but that lack any titles or descriptions. I quickly scroll through each video, give it a title and a description, and, so long as I'm okay archiving it on YouTube, I check a box and it gets marked for upload.
A standard cron job on my server routinely scans the database for videos that have been approved but not yet uploaded. Once it finds such a video, the YouTube API is invoked and the video gets uploaded to YouTube. All of the information about the video that has been populated in my database (such as the title and description) is sent along with it for YouTube to archive. The date/time that the video goes over is logged in my database, as is the unique video ID that YouTube provides. From there I can reference any video in my personal catalogue by looking up its YouTube ID myself.
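The cron entry itself is nothing exotic; something along these lines does the trick (the schedule, script path, and log file here are hypothetical):

# every 15 minutes, look for approved-but-not-yet-uploaded videos and push them to YouTube
*/15 * * * * /path/to/upload_pending_videos.sh >> /var/log/youtube-uploads.log 2>&1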
While I've not yet configured the site to serve the videos through YouTube's embedded player, this is the obvious next step. Not only would it take bandwidth load off of my server, but YouTube's servers are likely significantly faster than mine.
That's the gist of how it works. I've wanted to do this for about 7 years now. I'm delighted to have finally finished the coding that makes it work.
Incidentally, if you go to my YouTube channel, you'll find thousands of videos, all of which originated on this site.
This song of ours called "Lullaby" has become one of my very favorite songs that we've written.
The song was initially written by Sasha; in fact, his cello-driven band even plays a purely instrumental version of it. But Sasha brought it to Carly and me, and the three of us started working out more complex bits that fit our particular style.
It's incredibly enjoyable to be playing the song and get about 2.5 minutes in, when the bass and drums finally enter. I wind up playing a pretty simple interlude for about 30 seconds and then we break into a guitar solo. It's perhaps not the most complex guitar solo I've ever written, but for various reasons it remains one of my favorites.
As a little bit of trivia about the song, you can hear and see Carly pretend to be angry at the 4:29 mark. She had been trying to come in at a specific time over and over again in rehearsal and kept missing it. She actually nails her mark in this performance, but thought she was off, and so she looks to Sasha as an acknowledgement that she messed up. But she didn't. She did it perfectly, as Carly has tended to do.
It's not very often that I take the time to write technical posts about all of the complexities that go into my personal website, but this one was so frustrating and time-consuming that I figured I would share my findings with the world. I've also been doing a lot of ImageMagick work in my professional life lately, so I fully understand the frustration of not having good documentation for some of this.
The problem:
I have a bunch of .CR2 files in my photo gallery (RAW image files shot on a Canon). I am using ImageMagick for a number of different processing tasks, including resizing the images for my gallery. Unfortunately, ImageMagick was failing because ufraw-batch could not be found. The error would look something like this:
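It is the standard delegate-failure error from convert, along these lines (the exact delegate options and source line numbers will vary by version):

convert: delegate failed `"ufraw-batch" --silent --create-id=also --out-type=png --out-depth=16 "--output=%u.png" "%i"' @ error/delegate.c/InvokeDelegate/1065.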
After doing a ton of research and trying to hunt down ways to make ufraw-batch work with ImageMagick, I finally went down a different path and decided to configure DCRAW as ImageMagick's RAW-file delegate. This method wound up working perfectly, but it does require some special configuration. I've detailed that process below.
Assumptions:
The following installation was done on an Amazon AMI instance (essentially a CentOS machine), with ImageMagick 6.7.8-9 2015-10-08 Q16 installed. It also assumes the user has ROOT access and that all of the steps are performed as ROOT.
Installing DCRAW:
If you haven't already, sudo to root:
su root
Create a directory to work with somewhere in your home:
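# the directory name is arbitrary; dcraw-install is just an example
mkdir ~/dcraw-install
cd ~/dcraw-install

Next, download a DCRAW RPM built for your distribution into that directory and install it (the filename below is a placeholder for whichever package you grab):

rpm -Uvh dcraw-<version>.rpm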
At this point you may find that you need to install the liblcms dependency. If this has happened then the installation of your RPM package will have failed. If DCRAW installed without any problems then skip to the libJpeg section below. If there was a dependency problem, take a look at the following:
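The library in question is Little CMS. On a CentOS-style system it is typically available through yum, though the exact package name may differ by distribution:

yum install lcms-libs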
Please note that there could certainly be other dependencies missing as well. This was simply a problem that I ran into with a pretty typical modern Linux installation.
Installing libJpeg (for cjpeg):
At this point DCRAW should be installed on your system. On its own, the best it can do is convert to a TIFF file or write a raw stream to STDOUT. The problem with the raw stream (assuming you want to end up with a JPEG, as most people will) is that it isn't compressed to JPEG specs. If you simply pipe STDOUT to a file (xxx.jpg, for example), the image won't load because it isn't a properly compressed JPEG. You'll need to use cjpeg (or a similar tool) to accomplish this final piece.
Extract the libjpeg-turbo source:

gunzip libjpeg-turbo-1.4.2.tar.gz
tar -xvf libjpeg-turbo-1.4.2.tar
Go into the newly extracted directory and install libjpeg:
cd libjpeg-turbo-1.4.2
./configure
make
make install
Testing cjpeg:
At this point you should have DCRAW, any necessary dependencies, and cjpeg installed on your system. Incidentally, cjpeg will likely have been installed to /opt/libjpeg-turbo/bin/cjpeg. If you can't find it, do the following:
Update the mlocate database and find the jpeg tool:
updatedb
locate cjpeg
From here you'll have the fully qualified path of where the cjpeg executable lives. Mine was installed to /opt/libjpeg-turbo/bin/cjpeg (which I assume is a standard location for this program).
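A quick way to test the whole chain is to have DCRAW write its raw stream to STDOUT and pipe it straight into cjpeg (the input and output paths here are just examples):

dcraw -c -w /path/to/photo.CR2 | /opt/libjpeg-turbo/bin/cjpeg -quality 90 > /tmp/test.jpg

If /tmp/test.jpg opens as a normal image, everything is wired up correctly.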
Configuring ImageMagick Delegates:
The final part of this process is to configure ImageMagick's RAW delegate so that it uses your newly installed DCRAW instead of ufraw-batch. Be sure to back up your delegates.xml file before making any changes! You can break your ImageMagick installation if you're not careful!
Find your ImageMagick delegates file
locate delegates.xml
Be sure to back up the delegates file first!
cp /etc/ImageMagick/delegates.xml ~/
Open the file with your favorite editor:
vim /etc/ImageMagick/delegates.xml
Search the document for ufraw-batch
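In vim, that search is simply:

/ufraw-batch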
Replace the ImageMagick Delegates:
That untouched line in the delegates.xml file should look something like this:
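(This is the dng:decode delegate; the exact options can vary a bit between ImageMagick versions.)

<delegate decode="dng:decode" command="&quot;ufraw-batch&quot; --silent --create-id=also --out-type=png --out-depth=16 &quot;--output=%u.png&quot; &quot;%i&quot;"/>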
You'll want to change it to look like this (using your path to cjpeg):
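Something along these lines should work (substitute your own path to cjpeg and whatever quality setting you prefer):

<delegate decode="dng:decode" command="&quot;dcraw&quot; -c -w &quot;%i&quot; | &quot;/opt/libjpeg-turbo/bin/cjpeg&quot; -quality 90 &gt; &quot;%u.png&quot;"/>

After saving the file, run a convert on one of your .CR2 files to confirm that ImageMagick now hands the RAW decoding off to DCRAW.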