python-dateutil gotcha

A quick note: Don’t naively use dateutil.parser.parse() if you know your dates are ISO8601 formatted. Use dateutil.parser.isoparse() instead.

In [1]: from dateutil.parser import *
In [2]: datestr = '2016-11-10 05:11:03.824266'
In [3]: %timeit parse(datestr)
The slowest run took 4.78 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 211 µs per loop
In [4]: %timeit isoparse(datestr)
100000 loops, best of 3: 17.1 µs per loop
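
Both functions return the same datetime for ISO 8601 input; the difference is that isoparse() skips the format guessing parse() does, and refuses to guess at all. A quick sketch:

In [5]: parse(datestr) == isoparse(datestr)   # same result either way
Out[5]: True
In [6]: isoparse('10 Nov 2016')               # raises ValueError: not ISO 8601, no guessing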


On Lifelong Learning

One of my goals is lifelong learning.

Since finishing school over a decade ago, most of my learning has taken place on the job or through reading technical books. I often feel that books provide great surveys and detail on theory (which is important!), but they’re light on useful information for practitioners.

Coursera

Coursera was my first experience with online classes. I’ve done three courses there: Computer Networks (available here), Algorithms Part I, and Algorithms Part II, all college courses made available online. The Coursera experience worked well for me by introducing broad, completely new concepts with world-class lecturers. For me, there wasn’t much practical value in the assignments/exercises because of the time commitment required.

Coursera is great for introducing a broad new set of topics and, most importantly, for learning how to ask questions about the subject.

I didn’t attend a top-tier engineering school, but online courses helped me get through interview processes at “big” tech firms.

YouTube/Twitch

YouTube and Twitch are amazing resources for learning new skills. Lately I’ve been watching GDB debugging videos!

lynda.com

Our local public library partners with lynda.com, so the courses there are free. This is a tremendous, low-risk opportunity to take guided tours of new technologies!

See previous learning posts from 2009, 2012.

Book Review: Designing Data-Intensive Applications


Some days I feel that I’ve returned to my old DBA role, well… sort of. A lot of my day to day work revolves around a vast amount of data and applications for data storage and retrieval. I haven’t been a DBA by trade since 2011, nearly an eternity in the tech industry.

To me, Designing Data-Intensive Applications was a wonderful survey of “What’s changed?” as well as “What’s still useful?” regarding all things data. I found Designing had practical, implementable material, especially around newer document databases.

This book is a good balance of theory and practice. I particularly enjoyed the breakdown of concurrency-related problems and the concrete names it gives to classes of issues I’ve seen in practice (e.g., phantom reads, dirty reads, etc.).

The section about distributed systems theory was well done, but it was a definite lull in the book’s flow.

The book’s last section and last chapter were both strong. I would recommend this book to all of my colleagues.

Labels on Cloud Dataflow Instances for Cloud BigTable Exports

Google Cloud Platform has the notion of resource labels, which are often useful for tracking ownership and for cost reporting.

Google Cloud Bigtable supports data exports as sequence files. The process uses Cloud Dataflow, which ends up spinning up Compute Engine VMs for the processing, up to --maxNumWorkers of them.

I wanted to see what this was costing to run on a regular basis, but the VMs are ephemeral in nature and unlabelled. There’s an option not mentioned in the Google documentation to accomplish this task!

$ java -jar bigtable-beam-import-1.4.0-shaded.jar export --help=org.apache.beam.runners.dataflow.options.DataflowPipelineOptions
org.apache.beam.runners.dataflow.options.DataflowPipelineOptions:
 Options that configure the Dataflow pipeline.

--labels=<Map>
 Labels that will be applied to the billing records for this job.

Marvelous!
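
To use it, tack --labels onto the export command alongside your usual export options (project, instance, table, destination path). I believe Map-typed pipeline options are passed as a JSON object on the command line; the label keys and values below are just placeholders:

$ java -jar bigtable-beam-import-1.4.0-shaded.jar export \
    --runner=DataflowRunner \
    --labels='{"team": "storage", "purpose": "bigtable-export"}'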

Looking at PostGIS intersects queries

I’ve been looking hard at datastores recently, in particular PostgreSQL + PostGIS. In school I did a few GIS-related things and assumed it was all magic. It turns out there is no magic.

At work, we have an application that accepts polygons and returns objects whose locations are within that polygon. In GIS-speak this is often called an Area of Interest (AOI), and it’s typically represented in one of a few formats: GeoJSON, Well-Known Text, or Well-Known Binary (WKT/WKB).

So what happens when the request is accepted and how does PostGIS make this all work?

Here’s a graphical representation of my GeoJSON polygon:

And the same as WKT:

In the PostGIS world, geospatial data is stored as a geometry type.

Here’s a query returning WKT->Geometry:
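
It boils down to casting the Well-Known Text into the geometry type, something like this (the polygon below is a placeholder, not my actual AOI):

-- WKT in, PostGIS geometry (hex-encoded) out
SELECT ST_GeomFromText('POLYGON((-97.75 30.25, -97.70 30.25, -97.70 30.30, -97.75 30.30, -97.75 30.25))', 4326) AS geom;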

The PostGIS geometry datatype is an lwgeom struct.

So, to find all of the objects in a given AOI, the SQL query would look something like this (PostGIS FAQ):
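
Reusing the placeholder polygon from above (table and column names here are hypothetical), it’s roughly:

-- && uses the spatial index on bounding boxes; ST_Intersects does the exact check
SELECT id
FROM objects
WHERE geom && ST_GeomFromText('POLYGON((-97.75 30.25, -97.70 30.25, -97.70 30.30, -97.75 30.30, -97.75 30.25))', 4326)
  AND ST_Intersects(geom, ST_GeomFromText('POLYGON((-97.75 30.25, -97.70 30.25, -97.70 30.30, -97.75 30.30, -97.75 30.25))', 4326));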

OK, but what does the && operator do? Thanks to this awesome dba.stackexchange answer, we finally get some answers! In a nutshell:

  • && is short for geometry_overlaps
  • geometry_overlaps passes through several layers of abstraction (the answer did not show them all)
  • in the end, simple floating point comparisons check whether the two geometries’ bounding boxes overlap (see the sketch below)
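
A quick way to convince yourself that && only compares bounding boxes (a toy example, unrelated to my AOI): two line segments whose boxes overlap but which never touch.

SELECT ST_MakeLine(ST_MakePoint(0, 0), ST_MakePoint(2, 2)) &&
       ST_MakeLine(ST_MakePoint(0, 1.5), ST_MakePoint(0.5, 2)) AS bbox_overlap,      -- true
       ST_Intersects(ST_MakeLine(ST_MakePoint(0, 0), ST_MakePoint(2, 2)),
                     ST_MakeLine(ST_MakePoint(0, 1.5), ST_MakePoint(0.5, 2))) AS really_intersects;  -- false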

 

GPG Suite and gpg-agent forwarding

I had to fight a little with GPG Suite (Mac) and forwarding gpg-agent to an Ubuntu 16.04 target. This post describes what ended up working for me.

Source Machine:

  • macOS Sierra 10.12.6
  • GPG Suite 2017.1 (2002)

Target Machine:

  • gpg2 installed on Ubuntu 16.04 (sudo apt-get install -y gnupg2)

Source machine:

File: ~/.gnupg/gpg-agent.conf

Add this line:

extra-socket /Users/[user]/.gnupg/S.gpg-agent.remote

File: ~/.ssh/config

Add these lines:

RemoteForward /home/[user]/.gnupg/S.gpg-agent /Users/[user]/.gnupg/S.gpg-agent.remote
ExitOnForwardFailure Yes

Restart gpg-agent:

gpgconf --kill gpg-agent
gpgconf --launch gpg-agent

Target machine:

File: /etc/ssh/sshd_config

Add this line:

StreamLocalBindUnlink yes

Restart sshd:

sudo systemctl restart sshd
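
To sanity-check the setup, SSH from the source machine to the target and ask gpg2 there about your keys. This assumes your public key has already been imported on the target; any passphrase prompt will appear on the Mac via GPG Suite’s pinentry:

gpg2 --list-secret-keys       # 'sec' entries mean the secret key is reachable via the forwarded agent
echo test | gpg2 --clearsign  # should produce a signature without any local secret key material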

 

This article helped a lot while I was troubleshooting.

Book Review: Mastering Ansible

While debugging an Ansible playbook, I tweeted about variable precedence:

https://twitter.com/andyhky/status/734859666337861633

Through that tweet I learned how to spell mnemonic AND I got a recommendation to buy Mastering Ansible.  This blog post is a quick review of Mastering Ansible.

Overall, I enjoyed the book’s format more than anything else. The flow of objective/code/screenshot really made the concepts gel (using herp/derp in examples is a close second!). I’m an advanced Ansible user (over 3 years), and this book still showed me new parts of Ansible. The book had very useful suggestions for playbook optimization and simplification. Mastering Ansible also provided some nifty tips and tricks like Jinja Macros, ansible-vault with a script, and merging hashes that I didn’t know were possible.

Of course the book answered my variable precedence question. It also provided some clarity around using variables with includes, which was what I really needed.

If you’re just getting your feet wet with Ansible or you’ve been using it for years, Mastering Ansible is worth picking up.

F5 Rolling Deployments with Ansible

Rolling deployments are well covered in the Ansible docs. This post is about using those rolling deployment patterns with F5 load balancers. These techniques use the F5 modules, which require BIG-IP software version >= 11, and have been tested with Ansible 1.9.4.

There are a couple of things to examine before doing one of these deployments:

  • Preflight Checks
  • Ansible’s pre_tasks / post_tasks
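
At a high level, the play ends up shaped like the rolling-update pattern from the Ansible docs, with the F5 work living in pre_tasks and post_tasks. A rough sketch (the host group, serial size, and role name are placeholders):

- hosts: webservers
  serial: 1                # roll through one node at a time
  pre_tasks:
    # disable the node on the primary F5, then drain connections (details below)
  roles:
    - deploy_my_app        # whatever actually ships the new release
  post_tasks:
    # wait for connections to resume, then re-enable the node (details below)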

Preflight Checks

If you’re using a pair of F5s for High Availability, consider having a preflight check play to ensure your configurations are in sync and determine your primary F5:


- name: Collect BIG-IP facts
  local_action: >
    bigip_facts
      server={{ item }}
      user={{ f5_user }}
      password={{ f5_password }}
      include=device_group,device
  with_items: f5_hosts

- name: determine primary F5
  set_fact: primary_lb="{{ device[item]['management_address'] }}"
  when: device[item]['failover_state'] == 'HA_STATE_ACTIVE'
  with_items: device.keys()

- name: fail when not in sync
  fail: msg="not proceeding, f5s aren't in sync"
  when: device_group['/Common/device_trust_group']['device']|length > 1 and
        device_group['/Common/device_trust_group']['sync_status']['status'] != 'In Sync'

Recommendation: Use Automatic Sync for doing automated operations against a pair of F5s. I’ve listed the failsafe here for the sake of completeness.

Beware: the bigip_facts module sets facts named after whatever you pass to include (here, device_group and device).

pre_tasks & post_tasks

The pre_tasks section looks similar: disable the node, then drain connections. There’s a new variable introduced here (heartbeat_ips); more on that at the end of this post.

pre_tasks:
  - name: disable node
    local_action: >
      bigip_node server={{ primary_lb }}
                 user={{ f5_user }}
                 password={{ f5_pass }}
                 name={{ inventory_hostname }}
                 state=disabled

  - name: wait for connections to drain
    wait_for: host={{ inventory_hostname }}
              state=drained
              exclude_hosts={{ ','.join(heartbeat_ips) }}
              port=443

The post_tasks section isn’t much different. No new variables to worry about, just resume connections and re-enable the node.

post_tasks:
  - name: wait for connections to resume
    wait_for: host={{ inventory_hostname }}
              state=started
              port=443

  - name: enable node
    local_action: >
      bigip_node server={{ primary_lb }}
                 user={{ f5_user }}
                 password={{ f5_pass }}
                 name={{ inventory_hostname }}
                 state=enabled

A note about heartbeat_ips:

These are the source IP addresses that the F5 uses to health check your node. They can be found on each F5 by navigating to Network -> Self IPs. If you don’t know where to look, disable your node in the F5 and run something like tcpdump -i eth0 port 443 on it. The heartbeat IPs can also be discovered by adding “self_ip” to the bigip_facts module’s include parameter, usually referenced like this:

hostvars['localhost']['self_ip']['/Common/Self-IPv4']['address']

There are two downsides to discovering heartbeat IPs this way:

1) They have to be gathered from each F5 and concatenated together. The way the bigip_facts module operates, this requires use of a register, and it gets messy:

exclude_hosts={{ f5_facts.results|join(',', attribute='ansible_facts.self_ip./Common/internal-self.address') }}

2) Gathering bigip_facts is pretty time consuming: I’ve seen up to 30s per F5, depending on the amount of data returned. Storing these IPs elsewhere is much faster =)
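
If you do store them elsewhere, something as simple as a group_vars entry works (the addresses below are made up), and it feeds straight into the exclude_hosts={{ ','.join(heartbeat_ips) }} used in the drain task:

# group_vars/webservers.yml
heartbeat_ips:
  - 10.1.2.3
  - 10.1.2.4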

Giving FlowBAT a try

FlowBAT is a graphical flow-based analysis tool.

FlowBAT provides a very snappy UI that wraps the flow collection software SiLK. My interest in FlowBAT and SiLK is in using Open vSwitch to push sFlow data into SiLK and then using FlowBAT to quickly query that data. Hopefully I’ll get around to posting about OVS, sFlow, SiLK, and FlowBAT.

While I was getting started with FlowBAT (and SiLK), I saw there weren’t any Ansible roles for either component. The FlowBAT team maintains some shell scripts to install and configure FlowBAT, and the SiLK project has SiLK on a box. I ported both of these projects to Ansible roles and posted them to Ansible Galaxy.

To give them both a try in a Vagrant environment, check out vagrant-flowbat.

Remediation as a Service

First aid kit – Marcin Wichary (https://www.flickr.com/photos/mwichary/2615558474/)

I’ve seen a couple of automated remediation tools get some coverage lately, and both have received interesting threads on Hacker News.

One HN comment that stood out (from bigdubs):

I don’t like being the voice of the purist in this, but this seems like a bandaid on a bullet wound.

For most of the cases where this would seem to be useful there is probably a failure upstream of that usage that should really be fixed.

The system’s operators have to keep the system up and running. If the bug can’t be diagnosed in a sane amount of time but there’s a clear remediation, the right choice is to apply it and get the system back up.

A lot of operations teams employ this method: If Time to Diagnose > Time to Resolve, Then Workaround. This keeps the site up and running, but covering a system in band-aids leads to more band-aids, and their complexity only increases.

The commenter has a point: if everything’s band-aided, the system’s behavior becomes unpredictable. Another downside of automated remediation is that it can widen the divide between developers and operators. With sufficient automated remediation, in many cases operators don’t have to involve developers for fixes. This is bad.

Good reasons to do automated remediation:

  • Not all failures can be attributed to and fixed with software (broken NIC, hard drive, CPU fan)
  • Most companies do not control their technology stack end to end; controlling it fully would mean shipping your own device drivers or a modified Linux kernel
  • Even for the parts of the stack that are under a company’s control, the time to deploy a bugfix is often greater than the time to deploy an understood temporary workaround

There are opportunities for improving operations (and operators’ lives) by employing some sort of automated remediation solution, but it is not a panacea. Some tips for finding a sane balance with automated remediation:

  • Associate issues from an issue tracker and quantify the number of times each issue’s workaround is executed – no issue tracker, no automated remediation
  • Give the in-house and vendor software related workarounds a TTL
  • Mark the workarounds as diagnosed/undiagnosed, and invest effort in diagnosing the root cause of the workaround
  • Make gathering diagnostics the first part of the workaround

Finally, consider making investments in technology that enables useful diagnostic data to be gathered quickly and non-disruptively. One example of this is OpenStack Nova’s Guru Meditation reports.

Troubleshooting LLDP

Why

LLDP is a wonderful protocol that paints a picture of datacenter topology. lldpd is a daemon that runs on your servers, receives LLDP frames, and outputs the server’s network location and more. There’s also a recently patched lldp Ansible module.

Like all tools, LLDP/lldpd has had some issues. Here are the ones I’ve seen in practice, with diagnosis and resolution:

Switch isn’t configured to send LLDP frames

Diagnosing:

tcpdump -i eth0 -s 1500 -XX -c 1 'ether proto 0x88cc'

When LLDP is enabled on the switchport, switches send LLDP frames every 30s by default. If the capture above stays silent, the switchport’s configuration needs LLDP turned on.

Host isn’t reporting LLDP frames

Generally, this means lldpd isn’t running on the server: the LLDP frames are arriving (per the tcpdump above), but lldpctl returns nothing.

Diagnosing:

lldpctl # returns nothing
pgrep -f lldpd # returns nothing
service lldpd restart

Be sure that the lldpd service is set to run at boot, and take a look at its configuration options.
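
For example, on a systemd-based distro (adjust for your init system):

sudo systemctl enable lldpd
systemctl is-enabled lldpd   # should print "enabled"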

NIC is dropping LLDP frames

By far the most frustrating: NIC firmware issues can cause the NIC to drop LLDP frames (Page 10, item 14).

The way this one manifests:

  • lldpctl reports nothing
  • lldpd is running
  • switch is configured to send LLDP frames

Diagnosing:

Run a packet capture on the switch to ensure that the LLDP frames are being sent to the port. If you’re able to see the frame go out on the wire and traffic is otherwise functioning normally to the host, the problem lies with the NIC.

The fix here was to apply the NIC firmware upgrade; after that, LLDP was good to go!