Flashing TinkerOS onto your SD card from macOS

The main landing page for the Tinker Board only provides instructions for Windows and Linux. And at this point the Linux zip they provide for you to download is corrupt, or at least refuses to unzip on macOS. Anyways, here are the steps I took to get my card flashed and ready to go.

  1. Download TinkerOS 1.4 (or the latest release) from here
  2. Unzip it:
    unzip 20170223-tinker-board-linaro-jessie-alip-v14.zip
  3. Insert the microSD card into your MacBook via an adapter
  4. Open Disk Utility and ensure the SD card is formatted as FAT32
  5. Take note of the device; in my case it was disk2s1, but we only need to know that it’s disk2
  6. Unmount the card via Disk Utility
  7. Image the card:
    sudo dd if=/Users/anthony/Desktop/20170223-tinker-board-linaro-jessie-alip-v14.img of=/dev/rdisk2 bs=1m
  8. The dd command took about 70 seconds to complete
  9. Plug the card into the Tinker Board and tinker away
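Steps 5 through 7 can be sketched in a few lines of shell. This is only a sketch: the partition identifier disk2s1 is what Disk Utility reported on my machine, so verify yours before pointing dd at anything.

```shell
# Partition identifier as reported by Disk Utility; disk2s1 was mine,
# yours may differ -- verify it before running dd.
PART="disk2s1"

# Strip the slice suffix (s1) to get the whole disk, then build the raw
# device path; writing to /dev/rdiskN is much faster than /dev/diskN.
DISK="${PART%s[0-9]*}"
RAW="/dev/r${DISK}"
echo "$RAW"   # -> /dev/rdisk2

# On the Mac itself you would then run:
#   diskutil unmountDisk "/dev/$DISK"
#   sudo dd if=~/Desktop/20170223-tinker-board-linaro-jessie-alip-v14.img of="$RAW" bs=1m
```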

BackWPup s3 bucket does not exist after selecting it

This error is not very clear, since BackWPup lets you select your desired bucket from a list. If the bucket is already selected, how can it not exist? Anyways, I fixed this by switching the region from US-West-Oregon to US-Standard.

[INFO] BackWPup 3.3.6; A project of Inpsyde GmbH
[INFO] WordPress 4.7.2 on http://example.com/
[INFO] Log Level: Normal 
[INFO] BackWPup job: Weekly Backup
[INFO] Logfile is: backwpup_log_411491_2017-03-01_03-57-39.html
[INFO] Backup file is: backwpup_411491_2017-03-01_03-57-39.tar.bz2
[01-Mar-2017 03:57:39] 1. Try to backup database …
[01-Mar-2017 03:57:39] Connected to database example on localhost
[01-Mar-2017 03:57:39] Added database dump "example.sql.gz" with 290.23 KB to backup file list
[01-Mar-2017 03:57:39] Database backup done!
[01-Mar-2017 03:57:39] 1. Trying to make a list of folders to back up …
[01-Mar-2017 03:57:40] Added "wp-config.php" to backup file list
[01-Mar-2017 03:57:40] 1555 folders to backup.
[01-Mar-2017 03:57:40] 1. Trying to generate a file with installed plugin names …
[01-Mar-2017 03:57:40] Added plugin list file "Example.pluginlist.2017-03-01.txt.bz2" with 1.09 KB to backup file list.
[01-Mar-2017 03:57:40] 1. Trying to generate a manifest file …
[01-Mar-2017 03:57:40] Added manifest.json file with 5.99 KB to backup file list.
[01-Mar-2017 03:57:40] 1. Trying to create backup archive …
[01-Mar-2017 03:57:40] Compressing files as TarBz2. Please be patient, this may take a moment.
[01-Mar-2017 03:58:25] Backup archive created.
[01-Mar-2017 03:58:25] Archive size is 75.79 MB.
[01-Mar-2017 03:58:25] 10163 Files with 159.58 MB in Archive.
[01-Mar-2017 03:58:26] 1. Trying to send backup file to S3 Service …
[01-Mar-2017 03:58:26] ERROR: S3 Bucket "example-backup" does not exist!
[01-Mar-2017 03:58:26] ERROR: Job has ended with errors in 47 seconds. You must resolve the errors for correct execution.

iPhoto cannot export – OSStatus error 1856

Buy a Mac they said. They just work…

And of course, the one time you need to make a slideshow or export an iMovie, nothing works. I searched the web high and low for this solution. If you are having trouble exporting from iPhoto or iMovie, try this:

  1. Restart your Mac into safe mode by holding shift while it boots.
  2. Restart back into normal mode.
  3. Export from iPhoto/iMovie.

RuboCop and the 80 character line length limit is absurd

Having a hard line length limit is absurd; just change your wrap setting in vim! IMO this gives you the best of both worlds: everything on the screen when you need it, off when you don’t. It’s also easier to read indentation and syntax in one-liner code snippets. Letting the code flow off the screen can hide a lot of grit, which makes it easier to get a general lay of the land (i.e. the control logic); then turn wrap on and hack.

Every time you adjust the select query you have to fiddle with the line length because, OMG, it might overflow 80 and break your Mac.

I myself think any hard line length limit is absurd; just use common sense. If it is clearer to break a line, then break it, otherwise don’t. But breaking it for the sake of an arbitrary length limit?

You’re inevitably going to use a line limit (you might even agree with it), so put this in your .vimrc. It highlights the 101st column, i.e. the first character past a 100 character limit (yeah, I use a 100 character line length limit). I think GitHub displays 120 characters, so that might actually make the most sense.

autocmd Filetype ruby highlight ColorColumn ctermbg=red
autocmd Filetype ruby call matchadd('ColorColumn', '\%101v', 120)
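If you do end up appeasing RuboCop instead, the limit itself can be raised in .rubocop.yml. A minimal sketch, assuming a RuboCop from this era where the cop is named Metrics/LineLength (newer releases renamed it Layout/LineLength):

```yaml
# .rubocop.yml -- raise the limit to 100 to match the highlight column
Metrics/LineLength:
  Max: 100
```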

Automating WordPress updates with cron and the wp-cli

Ain’t nobody got time to worry about updates. I’d rather have a site break while updating than have it hacked by a script kiddie because I’ve neglected to log in for a while. We will see if this ever breaks anything.

Install WP-CLI if you have not already.

Add all your sites to the update_wp.sh script:

#!/bin/bash
declare -a arr=(
  "anthonypenner.com"
  "trekcamp.org"
  "etc"
)

for i in "${arr[@]}"
do
  echo "$i"
  # Skip the site if its directory is missing rather than running wp in the wrong place
  cd /usr/share/nginx/html/"$i" || continue
  sudo -u nobody wp core update
done

NOTE: Substitute /usr/share/nginx/html with the directory that holds your WordPress sites, and change the user nobody to the user that owns those directories. Check ownership with ls -al; it might be the user that apache/nginx runs as, which you can check with top, htop, or ps aux.

Make sure it’s executable:

chmod +x update_wp.sh

Run the script daily and live dangerously!

30 2 * * * bash /home/anthony/update_wp.sh >/dev/null 2>&1
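To live slightly less dangerously, the same loop can update plugins and themes as well; wp plugin update --all and wp theme update --all are standard wp-cli subcommands. This is a sketch: the web root, site list, and nobody user are the placeholder values from the script above.

```shell
#!/bin/bash
WP_ROOT="/usr/share/nginx/html"   # your web root
WP_USER="nobody"                  # the user that owns the site directories
sites=("anthonypenner.com" "trekcamp.org")

for site in "${sites[@]}"; do
  dir="$WP_ROOT/$site"
  echo "Updating $dir"
  # Skip any site whose directory is missing
  cd "$dir" || continue
  sudo -u "$WP_USER" wp core update
  sudo -u "$WP_USER" wp plugin update --all
  sudo -u "$WP_USER" wp theme update --all
done
```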

Elasticsearch order by date and indexing with Ruby on Rails

The Elasticsearch date field type seems like a lot of work: you have to manually map each date field. All I really wanted to do was order by date, and this approach doesn’t necessarily rule out more complicated date range filtering later. You just have to convert everything to epoch time (Ruby’s to_f gives you epoch seconds as a float). Sometimes it just seems like the most standard format.

Map the field type in your Elasticsearch index as a float. Note that changing a field type requires recreating the index.

mappings dynamic: 'false' do
  indexes :created_at, type: 'float'
end

In the method that converts your model to JSON for indexing, convert the date to a float.

def as_indexed_json(options = {})
  {
    # Epoch seconds as a float, to match the float mapping
    created_at: created_at.to_f
  }
end

Now you can easily order by asc and desc.
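For reference, the sorting this enables is an ordinary Elasticsearch sort clause. A sketch (the models index name and localhost host are hypothetical; the executable part only builds the request body):

```shell
# Sort body for an Elasticsearch search request: newest documents first,
# using the float created_at field.
QUERY='{"sort":[{"created_at":{"order":"desc"}}]}'
echo "$QUERY"

# Against a real cluster you would run something like:
#   curl -s 'localhost:9200/models/_search' -d "$QUERY"
```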

Linux: schedule a one-off reboot or task

So the server requires a reboot because you updated the kernel, but clients are using it and you don’t work at 1:00am? Not a problem…

echo "/sbin/shutdown -r now" | at 01:00 tomorrow
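The same trick works for any one-off task, not just reboots; at runs whatever it reads on stdin. The backup command below is a made-up example, so substitute your own task.

```shell
# Any command string can be handed to at(1); this tar invocation is purely
# illustrative -- substitute your own task.
CMD='tar czf /tmp/site-backup.tar.gz /var/www'
echo "$CMD"

# Schedule it for tonight:
#   echo "$CMD" | at 23:30 tomorrow
```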

You can also check the queued jobs with:

atq

job 15 at Wed May  6 04:00:00 2015

And you can cancel the job like so:

at -r jobid

Or you can delete all jobs with:

atrm $(atq | cut -f1)

And as always RTFM

Capistrano hangs on assets precompile and never finishes the deploy

This only started happening after I killed a deploy that was half completed. It could have been during the asset precompile step; I don’t remember. It’s weird, though, because the symptoms look more like an SSH session issue than residual files left over from my cancelled deploy.

I added keep-alive options to SSH:

:ssh_options => {
  :keepalive => true,
  :keepalive_interval => 30
}

I also confirmed that tmp/cache was in my linked_dirs to save some precompile time, as suggested in the repo’s issues thread.

set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets public/system}

Linux Backlight for CM Storm Devastator Keyboard

The CM Storm keyboard was dirt cheap and looks really cool. The only problem is that on Fedora 21 and other Linux distributions the Scroll Lock key is not enabled by default. And it just happens to be the key that toggles the keyboard backlight. I don’t know why they didn’t just put a switch on the keyboard.

For Fedora 21:

Install a keyboard automation tool:

yum install xdotool

Add an easy alias so you can toggle the backlight from the command line by typing k:

vim ~/.bashrc

alias k="xmodmap -e 'add mod3 = Scroll_Lock' && xdotool key --delay 10 'Scroll_Lock'"

Even better than that, let’s trigger the toggle command automatically whenever the screen unlocks (the monitor itself is started at login via .bash_profile). This part is GNOME specific and won’t work on Ubuntu (unless you removed Unity)!

vim ~/.bash_profile

dbus-monitor --session "type='signal',interface='org.gnome.ScreenSaver'" | ( while true; do read X; if echo "$X" | grep "boolean true" &> /dev/null; then :; elif echo "$X" | grep "boolean false" &> /dev/null; then sh /home/penner/scripts/keyboard.sh; fi; done ) &

vim /home/penner/scripts/keyboard.sh

#!/bin/bash

xmodmap -e "add mod3 = Scroll_Lock"
xdotool key --delay 10000 "Scroll_Lock"

You will need to adjust the script paths for your own setup, and you might want to tweak the delays as well. Good luck.

USDA Nutrient Data SR23 Postgres SQL dump

I found the MySQL dump online, but these days I prefer Postgres. For others who are in the same boat as me, I thought I would save you the trouble! Enjoy.

USDA Nutrient Data (SR23) Postgres Dump

I used py-mysql2pgsql and renamed all the tables to lower case.

I plan to hook this data into Elasticsearch so that I can search it with a Rails API and return JSON. Maybe I’ll open source the Elasticsearch Rails API I’m going to build.

UPDATE:

I noticed that this data set is missing the food categories; you can seed them with this:

# db/seeds.rb

groups = [
  ["0100", "Dairy and Egg Products"],
  ["0200", "Spices and Herbs"],
  ["0300", "Baby Foods"],
  ["0400", "Fats and Oils"],
  ["0500", "Poultry Products"],
  ["0600", "Soups, Sauces, and Gravies"],
  ["0700", "Sausages and Luncheon Meats"],
  ["0800", "Breakfast Cereals"],
  ["0900", "Fruits and Fruit Juices"],
  ["1000", "Pork Products"],
  ["1100", "Vegetables and Vegetable Products"],
  ["1200", "Nut and Seed Products"],
  ["1300", "Beef Products"],
  ["1400", "Beverages"],
  ["1500", "Finfish and Shellfish Products"],
  ["1600", "Legumes and Legume Products"],
  ["1700", "Lamb, Veal, and Game Products"],
  ["1800", "Baked Products"],
  ["1900", "Sweets"],
  ["2000", "Cereal Grains and Pasta"],
  ["2100", "Fast Foods"],
  ["2200", "Meals, Entrees, and Side Dishes"],
  ["2500", "Snacks"],
  ["3500", "American Indian/Alaska Native Foods"],
  ["3600", "Restaurant Foods"]
]

groups.each do |g|
  # first_or_create must be scoped to the group code; called directly on the
  # class it would only ever look at the first row in the table
  FoodGroup.where(FdGrp_Cd: g[0]).first_or_create(FdGrp_Desc: g[1])
end

Additionally, if you’d like some Rails models for your API, this might be a good start:

# app/models/food.rb
class Food < ActiveRecord::Base
  self.table_name = "food_des"
  self.primary_key = "NDB_No"

  has_many :measures, foreign_key: "NDB_No"
  has_one :food_group, primary_key: "FdGrp_Cd", foreign_key: "FdGrp_Cd"
end

# app/models/food_group.rb
class FoodGroup < ActiveRecord::Base
  self.table_name = "fd_group"
end

# app/models/measure.rb
class Measure < ActiveRecord::Base
  self.table_name = "weight"

  belongs_to :food, primary_key: "NDB_No", foreign_key: "NDB_No"
end