iPhoto cannot export – OSStatus error 1856

Buy a Mac they said. They just work…

And of course, the one time you need to make a slideshow or export an iMovie project, nothing works. I searched the web high and low for this solution. If you are having trouble exporting from iPhoto or iMovie, try this:

  1. Restart your Mac into Safe Mode by holding Shift while it boots.
  2. Restart back into normal mode.
  3. Export from iPhoto/iMovie.

RuboCop and the 80 character line length limit is absurd

Having a hard line length limit is absurd. Just change your wrap setting in vim! IMO this gives you the best of both worlds: everything is on the screen when you need it, and off when you don’t. It’s also easier to read indentation and syntax when statements stay on one line. Letting long lines flow off the screen hides a lot of grit, which makes it easier to get a general lay of the land (i.e. the control logic); then turn wrap on and hack.
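If you take the wrap-toggling approach, a quick mapping saves the typing (a sketch for your .vimrc; `<leader>` is whatever you have it mapped to):

```vim
" Toggle soft wrap with <leader>w; 'linebreak' keeps wrapped lines readable
" by breaking at word boundaries instead of mid-word.
nnoremap <leader>w :set wrap! linebreak!<CR>
```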

Every time you adjust a SELECT query you have to fiddle with the line breaks because OMG, it might overflow 80 characters and break your Mac.

I myself think any hard line length limit is absurd; just use common sense. If it is clearer to break a line, then break it; otherwise don’t. But breaking it for the sake of an arbitrary length limit?

You’re inevitably going to have to work with a line limit (you might even agree with it), so put this in your .vimrc; it highlights the first character past column 100 (yeah, I use a 100-character line length limit). I think GitHub displays about 120 characters, so that might actually make the most sense.

autocmd Filetype ruby highlight ColorColumn ctermbg=red
autocmd Filetype ruby call matchadd('ColorColumn', '\%101v', 120)
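If you’d rather relax RuboCop itself instead of fighting it, you can raise the limit in your project’s `.rubocop.yml` (in RuboCop versions of this era the cop was named `Metrics/LineLength`; newer releases renamed it `Layout/LineLength`):

```yaml
# .rubocop.yml
Metrics/LineLength:
  Max: 100
```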

Automating WordPress updates with cron and the wp-cli

Ain’t nobody got time to worry about updates. I’d rather have it break while updating than have it hacked by a script kiddie because I’ve neglected to log in for a while. We will see if this ever breaks anything.

Install WordPress-CLI if you have not already.

Add all your sites to the update_wp.sh script:

#!/bin/bash

# List the directory names of your WordPress sites here:
declare -a arr=(
  # "example-site-one"
  # "example-site-two"
)

for i in "${arr[@]}"; do
  echo "$i"
  cd /usr/share/nginx/html/"$i" || continue
  sudo -u nobody wp core update
done

NOTE: Replace /usr/share/nginx/html with the directory that holds your WordPress sites. Change nobody to the user that owns those directories; check with ls -al. It might be the user that apache/nginx runs as, which you can check with top, htop, or ps aux.
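A quick way to do those checks from the shell (the paths and the nginx process name here are the hypothetical ones from above):

```shell
# Who owns the site directory?
stat -c '%U' /usr/share/nginx/html

# What user is the web server actually running as?
ps -o user= -C nginx | head -n 1
```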

Make sure it’s executable:

chmod +x update_wp.sh

Run the script daily and live dangerously!

30 2 * * * bash /home/anthony/update_wp.sh >/dev/null 2>&1
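The `>/dev/null 2>&1` tail is what keeps cron from emailing you the script’s output: it sends stdout to /dev/null and then points stderr at the same place. You can see the effect in any shell:

```shell
# Both streams from the braced group are discarded, so only "done" survives:
out=$( { echo "to stdout"; echo "to stderr" >&2; } >/dev/null 2>&1; echo "done" )
echo "$out"   # prints: done
```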

Elasticsearch order by date and indexing with Ruby on Rails

The Elasticsearch date field type seems like a lot of work: you have to manually map each date field’s format. All I really wanted to do was order by date, and this approach doesn’t rule out more complicated date range filtering either. You just have to convert everything to epoch millisecond time, which sometimes seems like the most standard format anyway.

Map the field in your Elasticsearch index as a float. Note that changing a field’s type requires recreating the index.

mappings dynamic: 'false' do
  indexes :created_at, type: 'float'
end

In the method that converts the model to JSON for indexing, convert the date to epoch milliseconds.

def as_indexed_json(options = {})
  model_attrs = {
    created_at: (created_at.to_f * 1000).to_i
    # ... the rest of your indexed attributes
  }
  model_attrs
end

Now you can easily sort ascending or descending on created_at.
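To see why the float mapping sorts correctly, here is the same conversion in plain Ruby (no Elasticsearch needed): epoch values compare numerically, so newest-first is just a descending sort.

```ruby
require 'time'

# Convert ISO timestamps to epoch milliseconds -- the value indexed above.
stamps = ['2015-01-01T00:00:00Z', '2015-05-06T04:00:00Z'].map do |t|
  (Time.parse(t).to_f * 1000).to_i
end

# Numeric values sort naturally, so "newest first" is a descending sort.
newest_first = stamps.sort.reverse
```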

Linux: schedule a one-off reboot or task

So the server requires a reboot because you updated the kernel but clients are using it and you don’t work at 1:00am? Not a problem…

echo "/sbin/shutdown -r now" |at 01:00 tomorrow

When you schedule the job, at confirms it with output like:

job 15 at Wed May  6 04:00:00 2015

You can check the queued jobs with:

atq

And you can cancel the job like so:

at -r jobid

Or you can delete all jobs with:

atrm $(atq | cut -f1)
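To see what that pipeline does: atq lists one job per line with the job id in the first tab-separated field, and tab is cut’s default delimiter, so `cut -f1` extracts just the ids for atrm. Simulating atq-style output:

```shell
# Pipe simulated atq output through the same cut invocation;
# prints the two job ids, one per line.
printf '15\tWed May  6 04:00:00 2015 a anthony\n16\tThu May  7 04:00:00 2015 a anthony\n' | cut -f1
```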

And as always, RTFM: man at.

Capistrano hangs on assets precompile and never finishes the deploy

This only started happening after I killed a half-completed deploy. It could have been during the asset precompile step; I don’t remember. It’s weird, though, because the symptoms look more like an SSH session issue than residual files left over from the cancelled deploy.

I added keep-alive options to SSH in my deploy config:

set :ssh_options, {
  keepalive: true,
  keepalive_interval: 30
}

I also confirmed that tmp/cache was in my linked_dirs to save some precompile time; this was suggested in the repo’s issues thread.

set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets public/system}

Linux Backlight for CM Storm Devastator Keyboard

The CM Storm keyboard was dirt cheap and looks really cool. The only problem is that on Fedora 21 and other Linux distributions the Scroll Lock key is not enabled by default, and it just happens to be the key that toggles the keyboard backlight. I don’t know why they didn’t just put a switch on the keyboard.

For Fedora 21:

Install a keyboard binding tool:

yum install xdotool

Add an alias so you can toggle the backlight from the command line with the k key:

vim ~/.bashrc

alias k="xmodmap -e 'add mod3 = Scroll_Lock' && xdotool key --delay 10 'Scroll_Lock'"

Even better, let’s trigger the toggle command on login. This part is GNOME specific and won’t work on Ubuntu (unless you removed Unity)!

vim ~/.bash_profile

dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" | \
  ( while true; do
      read X
      if echo "$X" | grep "boolean true" &> /dev/null; then
        : # screen locked, do nothing
      elif echo "$X" | grep "boolean false" &> /dev/null; then
        sh /home/penner/scripts/keyboard.sh # screen unlocked, re-enable the backlight
      fi
    done ) &

vim /home/penner/scripts/keyboard.sh


#!/bin/sh
# Re-enable Scroll Lock (and with it the backlight) after unlock
xmodmap -e "add mod3 = Scroll_Lock"
xdotool key --delay 10000 "Scroll_Lock"

You’ll need to adjust the script paths for your setup, and you might want to tweak the timeouts as well. Good luck.

USDA Nutrient Data SR23 Postgres SQL dump

I found the MySQL dump online, but these days I prefer Postgres. For others in the same boat, I thought I would save you the trouble! Enjoy.

USDA Nutrient Data (SR23) Postgres Dump

I used py-mysql2pgsql and renamed all the tables to lower case.

I plan to hook this data into elastic search so that I can search on it with a rails api and return JSON. Maybe I’ll open source the elastic search rails API I’m going to build.


I noticed that this data set is missing the food categories; you can seed them with this:

# db/seeds.rb

groups = [
  ["0100", "Dairy and Egg Products"],
  ["0200", "Spices and Herbs"],
  ["0300", "Baby Foods"],
  ["0400", "Fats and Oils"],
  ["0500", "Poultry Products"],
  ["0600", "Soups, Sauces, and Gravies"],
  ["0700", "Sausages and Luncheon Meats"],
  ["0800", "Breakfast Cereals"],
  ["0900", "Fruits and Fruit Juices"],
  ["1000", "Pork Products"],
  ["1100", "Vegetables and Vegetable Products"],
  ["1200", "Nut and Seed Products"],
  ["1300", "Beef Products"],
  ["1400", "Beverages"],
  ["1500", "Finfish and Shellfish Products"],
  ["1600", "Legumes and Legume Products"],
  ["1700", "Lamb, Veal, and Game Products"],
  ["1800", "Baked Products"],
  ["1900", "Sweets"],
  ["2000", "Cereal Grains and Pasta"],
  ["2100", "Fast Foods"],
  ["2200", "Meals, Entrees, and Side Dishes"],
  ["2500", "Snacks"],
  ["3500", "American Indian/Alaska Native Foods"],
  ["3600", "Restaurant Foods"]
]

groups.each do |g|
  FoodGroup.where(FdGrp_Cd: g[0], FdGrp_Desc: g[1]).first_or_create
end
Additionally, if you’d like some Rails models for your API, this might be a good start:

# app/models/food.rb
class Food < ActiveRecord::Base
  self.table_name = "food_des"
  self.primary_key = "NDB_No"

  has_many :measures, foreign_key: "NDB_No"
  has_one :food_group, primary_key: "FdGrp_Cd", foreign_key: "FdGrp_Cd"
end

# app/models/food_group.rb
class FoodGroup < ActiveRecord::Base
  self.table_name = "fd_group"
end

# app/models/measure.rb
class Measure < ActiveRecord::Base
  self.table_name = "weight"

  belongs_to :food, primary_key: "NDB_No", foreign_key: "NDB_No"
end

Rails 4.2 and Elasticsearch – Querying and Filtering Across Multiple Nested Models

At work recently I have been working on a new search page. The requirements were searching, plus filtering of said search results. Things get complicated, and performance tends to suffer, when dealing with complicated and far-reaching relationships: you end up joining every table in the database, and even then you are only filtering, not actually searching. Sometimes denormalized data just makes sense. It’s still a WIP, but here it is anyway; hope this snippet helps you get rolling with your project.

def self.search(query = "", options = {})
  params = {
    query: {
      filtered: {
        # Both query clauses are built up front; the irrelevant one is
        # deleted below, depending on whether a search term was given.
        query: {
          multi_match: {
            query: query,
            fields: ['title^10', 'overview']
          },
          match_all: {}
        },
        filter: {
          bool: {
            must: [
              {
                nested: {
                  path: 'regions',
                  filter: {
                    bool: {
                      must: [
                        { terms: { 'regions.id' => options[:regions] } }
                      ]
                    }
                  }
                }
              },
              {
                nested: {
                  path: 'genres',
                  filter: {
                    bool: {
                      must: [
                        { terms: { 'genres.id' => options[:genres] } }
                      ]
                    }
                  }
                }
              }
            ]
          }
        }
      }
    },
    sort: [
      {
        options[:col].try(:downcase) => {
          order: options[:direction].try(:downcase) #, ignore_unmapped: true
        }
      }
    ]
  }

  params[:query][:filtered][:query].delete(:match_all) if query.present?
  params[:query][:filtered][:query].delete(:multi_match) if query.blank?
  params.delete(:sort) if options[:col].blank? || options[:direction].blank?

  __elasticsearch__.search(params) # assuming the elasticsearch-model gem
end


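The delete-based pruning at the end can look odd in isolation, so here is the trick on its own in plain Ruby (a hypothetical minimal hash, using `empty?` instead of Rails’ `blank?` so it runs standalone): build both query clauses, then delete whichever one doesn’t apply to this request.

```ruby
query = ""  # a blank search: keep match_all, drop multi_match
params = { query: { filtered: { query: { multi_match: { query: query }, match_all: {} } } } }

params[:query][:filtered][:query].delete(:match_all) unless query.empty?
params[:query][:filtered][:query].delete(:multi_match) if query.empty?

params[:query][:filtered][:query].keys  # => [:match_all]
```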

AngularJS: persisting data across controller instances

Where do we want to store our data in AngularJS, and how does it flow through your web/mobile app? This is especially important if you are using AngularJS in a mobile context. Recently I’ve been working on some Ionic Framework apps. Here is my example.

Say the user leaves a page and then navigates back. We end up duplicating API calls and losing the user’s context within the data. Behind the scenes we did this:

  1. Initialize controller A
  2. Load data set A into controller A via API call
  3. Load page B
  4. Initialize controller A
  5. Load data set A into controller A via API call

Alternatively we could have done this:

  1. Initialize controller A
  2. Load data set A into service A via API call
  3. Pass data from service A to Controller A via a promise
  4. Load page B
  5. Initialize controller A
  6. Load cached data set A from service A

An example of a controller and service pairing might look like this (NOTE: this is specific to my Ionic/Cordova mobile app, but you get the idea; also note that the more function returns a promise):

.controller('HotCtrl', function($scope, UtilService, HotService) {
  $scope.util = UtilService;
  $scope.movies = HotService.all();

  $scope.doRefresh = function() {
    $scope.movies = [];
    setTimeout(function() {
      // (refresh body elided in the original post)
    }, 700);
  };

  $scope.moreMovies = function() {
    $scope.movies = HotService.all();
    setTimeout(function() {
      // (elided in the original post)
    }, 200);
    $scope.hasMore = HotService.hasMore();
  };
})

.factory('HotService', ['$resource', 'MovieService', 'UtilService', function($resource, MovieService, UtilService) {
  var page = 1;
  var movies = [];
  var regions = UtilService.getRegions();
  var hasMore = true;

  return {
    all: function() {
      // Invalidate the cache if the user's region filter changed.
      if (regions !== UtilService.getRegions()) {
        regions = UtilService.getRegions();
        movies = [];
      }
      return movies;
    },
    more: function() {
      var options = {page: page, n: 'hot', per_page: 500};
      // Returns a promise; appends the next page to the cached array.
      return MovieService.query(UtilService.getParams(options), null, function(response, headers) {
        hasMore = response.length >= 1;
        angular.forEach(response, function(value, key) {
          movies.push(value);
        });
        page++;
      }, function(response) {
        // error callback (body elided in the original post)
      });
    },
    get: function(index) {
      return movies[index];
    },
    hasMore: function() {
      return hasMore;
    },
    clear: function() {
      page = 1;
      movies = [];
      hasMore = true;
    }
  };
}]);