Tuesday, July 14, 2015
Notes from QCon New York 2015

I had the opportunity to speak at--and hence, attend--QCon New York last month and thought I would share some highlights and notes from some of the talks I was able to see. Overall, this remains a high-quality conference, with multiple sessions where I had to make tough choices about which talk to attend. I was not able to attend talks in every session due to responsibilities from the "home front" at work, and did not always take notes (sometimes I just listened and tweeted particular points). I would definitely consider both attending and speaking at QCon again.


Chris Sanden: Automating Operational Decisions in Real-time


This talk was given by Chris Sanden, Senior Analytics Engineer at Netflix, and discussed three main areas where they apply automated analysis and machine learning in their architecture. By way of context, at this point in time, Chris said Netflix has over 700 services, with thousands of total server instances distributed across 3 AWS regions. From these instances, they collect millions of different metrics, which is plenty of data to feed to your favorite machine learning algorithm.


Automated Canary Analysis. The first area they applied machine learning of sorts was to assist with their canary deployment schemes. The idea here is that new versions of software are gradually rolled out into production, with a series of checkpoints where the new version is analyzed to make a go/no-go decision for advancing to the next checkpoint. Early on, the new version takes a small percentage of the overall service traffic, eventually having more instances deployed and taking a correspondingly greater portion of the traffic.


An interesting detail here was that they did not just run "mostly old" and "a little new" at the beginning; instead, they also designated a similarly-sized group of the old version in order to have an apples-to-apples comparison. For example, they might have 88 servers being the main production group on the old version, the canary deployment running on 6 servers with the new version, but then a third control group of 6 servers also running the old version. The canary would be compared against this control group instead of to the primary production group.


The rough scoring algorithm works as follows: select a set of metrics; every X minutes, compute the mean value of each metric for the canary and control groups; calculate the ratio of the canary mean to the control mean; characterize the ratio as "high", "low", or "same"; and calculate the final canary score as the percentage of metrics that match or outperform the control. They then set a go/no-go threshold for the number of passing metrics in order to progress to the next stage of deployment, with a larger canary group (and a correspondingly larger control group). They found, in practice, that up to 16% of versions were assigned a 'failing' score, although humans still have the final decision on whether or not to forge ahead or roll back. At any rate, they seemed to be satisfied that this analysis was finding about the right amount of 'things to be considered by a human'.
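As a rough sketch, the scoring procedure might look something like the following. This is my own simplified illustration, not Netflix's actual code: the sample data, the 5% "same" tolerance, and counting only metrics that match the control as passing are all assumptions (the real scoring also credits metrics where the canary outperforms the control, which requires knowing each metric's "good" direction).

```python
from statistics import mean

def classify_ratio(canary_mean, control_mean, tolerance=0.05):
    """Characterize the canary/control ratio as 'high', 'low', or 'same'."""
    if control_mean == 0:
        return "same" if canary_mean == 0 else "high"
    ratio = canary_mean / control_mean
    if ratio > 1 + tolerance:
        return "high"
    if ratio < 1 - tolerance:
        return "low"
    return "same"

def canary_score(metrics):
    """metrics maps metric name -> (canary samples, control samples).

    Returns the percentage of metrics where canary and control agree
    within the tolerance."""
    passing = sum(
        1 for canary, control in metrics.values()
        if classify_ratio(mean(canary), mean(control)) == "same")
    return 100.0 * passing / len(metrics)

# Made-up readings for one analysis window.
window = {
    "latency_ms": ([102, 98, 101], [100, 99, 100]),    # comparable
    "error_rate": ([0.5, 0.9, 0.7], [0.1, 0.2, 0.1]),  # canary much worse
}
score = canary_score(window)
```

Here the latency metric passes but the error rate does not, so the canary scores 50%, which a go/no-go threshold would presumably fail.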


They did note that several things made this somewhat of an art: selecting the right metrics; choosing the right frequency of analysis; choosing the right number of metrics (neither so few that you miss problems nor so many that the analysis is prohibitively expensive); and deciding whether a failing score was a false positive or not. They also noted some caveats: a single outlier instance in either the control or canary group can skew the analysis (because they are using means), and the analysis did not work reliably for canary groups smaller than six instances (which meant it was mostly useful for services with a larger number of deployed instances).


Server Outlier Detection. When you have a large number of server instances, outliers can hide in aggregated metrics. For example, a single abnormally slow service instance may not move the mean or 90th-percentile latency, but you still care about finding it, because some of your customers are still being served by it. The next automated analysis application Chris described was specifically designed to find these outliers.


Their technique was to apply an "unsupervised machine learning" algorithm, DBSCAN (density-based spatial clustering of applications with noise) to group server instances into clusters based on their metric readings. Conceptually, if a point belongs to a cluster it should be near lots of other points as measured by some concept of "distance"; the servers that aren't close enough to a cluster are marked as outliers.


In practice, their procedure looks like:

  1. collect a window of measurements from their telemetry system
  2. run DBSCAN
  3. process the results, applying custom rules (e.g. ignore servers that have already been marked as out-of-service)
  4. perform an action on outliers (e.g. terminate instance or remove from service)

Chris noted that the DBSCAN algorithm requires two input parameters as configuration, and these need to be tuned for each application. Since they want application owners to be able to use the system without in-depth knowledge of DBSCAN itself, they instead ask application owners to estimate how many outliers they expect (based on past experience) and then auto-tune the parameters via simulated annealing until DBSCAN finds about that number of outliers.
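The noise-detection idea at the heart of DBSCAN can be sketched in a few lines of pure Python. This is a deliberately minimal illustration, not Netflix's implementation (real DBSCAN also expands clusters outward from core points, and the eps/min_pts values and server metrics below are made up):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def find_outliers(points, eps, min_pts):
    """Minimal DBSCAN-style noise detection.

    A point is a 'core' point if at least min_pts points (itself included)
    lie within eps of it; any point not within eps of some core point is
    marked as an outlier. eps and min_pts are the two parameters that
    must be tuned per application."""
    core = [i for i, p in enumerate(points)
            if sum(euclidean(p, q) <= eps for q in points) >= min_pts]
    return [i for i, p in enumerate(points)
            if not any(euclidean(p, points[c]) <= eps for c in core)]

# Each point is (mean latency ms, CPU %) for one server instance (made-up data).
servers = [(100, 40), (101, 42), (99, 41), (102, 39), (250, 95)]
outliers = find_outliers(servers, eps=10.0, min_pts=3)
```

The first four servers form a dense cluster, while the slow, hot fifth server has no neighbors within eps and is flagged as an outlier.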

Anomaly Detection. Here they use machine learning to identify cases where aggregate metrics--ones that apply across all instances of a service--have become "abnormal" in some fashion. This works by training a model on historical metrics so that it learns what "normal" means, then loading it into a real-time analytics platform to watch for anomalies. The training happens by having service owners "tag" various anomalies as they occur in the dataset. Because what is "normal" for a service can drift over time as usage patterns and software versions change, they automatically evaluate each model against benchmark data nightly, retraining the model when performance (accuracy) has degraded, and then automatically switching to a more accurate model when one is found. They also try to capture when their users (other developers) think a model has drifted and try to make it easy to capture and annotate new data for testing.
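Netflix trains real models from human-tagged anomalies, but the overall train-then-monitor shape can be shown with an intentionally trivial stand-in "model" (historical mean and standard deviation; the threshold and data below are made up for illustration):

```python
from statistics import mean, stdev

def train_model(history):
    """'Train' a toy model of normal behavior: the historical mean and
    standard deviation of the metric."""
    return mean(history), stdev(history)

def is_anomalous(model, value, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = model
    return abs(value - mu) > threshold * sigma

# Historical readings of some aggregate metric, e.g. requests/sec.
history = [100, 98, 103, 101, 99, 102, 100, 97]
model = train_model(history)
```

A real system would retrain `model` nightly against benchmark data and swap it out when a more accurate candidate is found, as described above.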

Chris identified that bootstrapping the data set and initial model can be done by generating synthetic data and then intentionally injecting anomalies. He also mentioned that Yahoo has released an open source anomaly benchmark dataset. In addition, there are now multiple time-series databases available under open source licenses. One gotcha they have run into is that because humans are the initial source of training data (via classifying metrics as anomalous or not), they can sometimes be inaccurate or inconsistent when tagging data.

Lessons Learned. Chris indicated that the "last mile" can be challenging, and that "machine learning is really good at partially solving just about any problem." It's easy to get to 80% accuracy, but there are diminishing returns on effort after that. Of course, for some domains, 80% accuracy is still good enough/useful. Finally, he suggested not letting "Skynet out into the wild without a leash"--if the machine learning system is actually going to take operational actions, you need to make sure there are safeguards in place ("Hmm, I think I will just classify *all* of these server instances as outliers and just terminate them...") and to make sure that the safeguards have been well tested!

Mary Poppendieck: Design vs. Data

Mary Poppendieck (of Lean Software Development fame) asked: How do we get generative architectural designs that evolve properly? She cited examples from physical architecture (turns out she comes from a family filled with architects), particularly Gothic cathedrals whose construction spanned decades and even centuries in some cases. Their construction certainly spanned multiple architects and master masons. In some cases, this is obvious, such as cathedrals whose towers to the left and right of the main entrance do not look even remotely similar. In other cases (and Notre Dame de Paris was identified as one), the building nonetheless has an overall consistency to it. How does this work?

Mary reviewed Christopher Alexander's (yep, the pattern language guy) "Theory of Centers" that described fifteen properties of wholeness that good architecture should have. Mary proposed that ten of these--(1) levels of scale; (2) strong centers; (3) boundaries; (4) local symmetries; (5) alternating repetition (recursion); (6) echoes (patterns); (7) positive space; (8) good shape; (9) simplicity; and (10) not-separateness (connectedness)--had analogues for software architecture. Her hypothesis is:

Learning through ongoing experimentation is not an excuse for sloppy system design. On the contrary: strong systems grow from a design vision that helps maintain "Properties of Wholeness" while learning through careful analysis and rigorous experiments.

She suggested the Android Design Principles were a good example of this concept.

Mary then moved on to propose an architectural design language set up to allow for incremental learning and development while maintaining an overarching "wholeness". The main principles were:

  1. Understand data and how to use it. Data must be central to an architecture, and "a picture is worth a thousand data points". It's important to understand the difference between analysis (examining data you already collect) vs. experimentation (specifically collecting metrics to prove or disprove a hypothesis). Everyone must be on the same team, from data scientists to engineers to architects. Mary suggested these principles echoed Alexander's properties of shape, boundaries, and connectedness.
  2. Simplify the job of data scientists. Data pipelines must be wide and fast. Experiments need design and structure. Access to data must be provided through APIs that support learning and control. Alexander parallels: space, simplicity, levels of scale.
  3. See, Think, Gain Amazing Insights. Be conversant with the best tools and analytical models. Be explicit about assumptions. Make it easy to share the search for patterns and outliers. Test insights rigorously. Alexander parallels: patterns, symmetry, recursion.

Mary classified our uses of data into four main categories: monitoring, control, simulation, and prediction. A good data architecture will support all of these, and so it must provide the following set of capabilities:

  • fast pipelines
  • data wrangling
  • analytics
  • visualization
  • designed experiments
  • machine learning
  • adaptable business systems and processes (i.e. you must be ready and able to use insights gained). [Incidentally, I suspect this is the most challenging for many businesses to achieve].

In summary, Mary suggests we have to design the entire system, not just the code; that is, technical architecture must also account for its data by-products and the surrounding processes it needs to support.

Finally, Mary had some choice quotes during the Q&A period after her talk:

Additional resources: (recommended by Mary)

Kovas Boguta & David Nolan: Demand-Driven Architecture

This talk was given jointly by Kovas Boguta and David Nolen. They correctly observed that the proliferation of different clients for many APIs has put a lot of pressure on the server side of the API, as the server wants to present a one-size-fits-all RESTful interface, yet the clients often need customized versions of those resources to deliver polished experiences. In particular, many clients often need to present what are essentially joins across multiple resources. With N clients, you end up with N front end teams "attacking" the service team with N different sets of demands, resulting in what they described as a "Christmas tree service." The speakers suggested this was only going to get worse, not better, with the continued proliferation of mobile and IoT devices.

David observed that RDBMSes solved a similar problem previously: building a generalized interface (SQL) and allowing clients to issue requests that were queries specifying what data they wished to receive. Of course, we know well that exposing a SQL interface is rife with security problems, but perhaps the overall pattern can still be applied with a restricted "query language" of sorts that is easier to reason about.

The principles they proposed were:

  • the client must specify exactly what it wants, no more, no less, including specifying in what shape the data is returned. The request is basically a skeleton or template of what is desired for the response.
  • composition: the demand (query) is specified as a recursive data structure, which allows for variation/substitution. They proposed a JSON-based format, so it also supports batching as a core construct (via arrays).
  • interpretation: the service interprets/decides how to satisfy the specified demand; the client should not care how data is sourced behind the covers. The query language they proposed is less expressive than SQL and is hoped to be more amenable to inspection to understand security properties.
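As a toy illustration of the general idea (not Datomic's pull syntax or GraphQL's actual semantics), the demand can be a recursive skeleton of the desired response that a resolver fills in; the shape of the request dictates the shape of the reply:

```python
def fulfill(demand, source):
    """Resolve a demand (a skeleton of the desired response) against a data
    source. Lists denote batches; dicts are recursed into; a leaf value of
    None means 'fill this in'."""
    if isinstance(demand, list):
        return [fulfill(d, source) for d in demand]
    if isinstance(demand, dict):
        return {key: fulfill(sub, source[key]) for key, sub in demand.items()}
    return source  # leaf: return whatever the source holds

# Hypothetical backing data, perhaps assembled from several services.
data = {"user": {"name": "Ada", "email": "ada@example.com", "internal_id": 7},
        "orders": {"count": 3, "audit_log": ["..."]}}

# The client asks for exactly the fields it needs, in the shape it wants.
demand = {"user": {"name": None}, "orders": {"count": None}}
response = fulfill(demand, data)
```

The client receives only `user.name` and `orders.count`, with no say over (or knowledge of) how the server sourced them.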

David then showed some Clojurescript source code that used the "pull syntax" from Datomic for the query language. In this code, he was able to annotate views with the queries that were needed to populate them; this allows for full client flexibility while making maintenance tractable. David pointed out that this doesn't mean you don't need a backend; on the contrary, you still need to worry about security, routing, and caching implications.

[Jon's commentary] Netflix tackled this same problem in a slightly different way, which was to build a scriptable API facade. Client teams were still able to customize device-specific APIs via Groovy scripts built on data/services that were exposed as libraries in their API facade application. This avoids exposing a more general query interface, which makes security analysis easier, although it does still require the client teams to implement and maintain the server sides of their APIs.

Additional material (via Kovas and David):

  • Datomic provides a pull syntax and an evolvable schema; the client can trivially receive change sets to keep a dataset up-to-date
  • Relay/GraphQL from Facebook: This is a layer over react.js that provides the illusion of having a monolithic application architecture (e.g. "pretend I have a single logical database").
  • JSONGraph/Falcor from Netflix: They were able to eliminate 90% of their client-side networking code by building against a more general server API.




Jesus Rodriguez: Powering the Industrial Enterprise: The Emergence of the IOT PaaS

Jesus Rodriguez, formerly of Microsoft and now a veteran of several startups, noted that Gartner says IoT is at the peak of inflated expectations (while also noting that Gartner is responsible for a lot of IoT hype!). Jesus also noted that 70% of IoT funding rounds from 2011-2013 were related to wearables, and there was almost no investment in platforms, which he saw as open territory.

Jesus suggested that enabling enterprise-scale IoT brings several challenges: large amounts of data; connectivity; integration; event simulation; scalability; security; and real time analytics. Therefore, he thought that there was a need for a new type of platform, an IoT platform-as-a-service (IoT PaaS). He thought we would see both centralized (interactions are orchestrated by some sort of centralized hub or service) and decentralized (devices interact directly) models develop, so there was a need perhaps for multiple types of PaaS here.

In the centralized model, smart devices talk to a central hub that provides backend capabilities but also manages and controls the device topology. In the decentralized model, devices operate without a central authority. Jesus felt that in this model the smart devices would host a version of the IoT PaaS itself; in this setting I presume it would be some sort of library, framework, or co-deployed process.

For the remainder of the talk, Jesus identified several capabilities that he felt ought to be provided by an IoT PaaS, as well as providing pointers to some existing technology. Since I wasn't familiar with a lot of the technologies he mentioned, this was an exercise in "write everything down and look it up later" for me (although I was the only person who raised a hand when he asked if anyone had heard of CoAP, which I had learned about via the appendix of Mike Amundsen's RESTful Web APIs book).

Centralized capabilities

  1. device management service: managing smart devices in an IoT topology; device monitoring; device security; device ping (tech: consul.io; XMPP discovery extensions; IBM IoT Foundation device management API)
  2. protocol hub: provide consistent data/message exchange experiences across different devices. Unify management, discovery, and monitoring interfaces across different devices (IOTivity protocol plugin model; IOTLab protocol manager; Apigee Zetta / Apigee Link)
  3. device discovery: registering devices in an IoT topology; dynamically discovering smart devices in IoT network (UDP - multicast/broadcast, CoAP, IOTivity discovery APIs)
  4. event aggregation: execute queries over data streams; compose event queries; distribute query results to event consumers. complex event processing. (Apache Storm, AWS Kinesis; Azure Event Hubs and Stream Analytics; Siddhi (WSO2))
  5. telemetry data storage: store data streams from smart devices; store the output of the event aggregator service; optimize access to the data based on time stamps; offline query (time series: openTSDB, KairosDB, InfluxDB; offline: Couchbase; IBM Bluemix Time Series API)
  6. event simulation: replay streams of data; store data streams that simulate real world conditions; detect and troubleshoot error conditions associated with specific streams (Azure Event Hubs; Kinesis; Apache Storm; PubNub)
  7. event notifications: distribute events from a source to different devices; devices can subscribe to specific topics (PubNub; Parse Notifications; MQTT)
  8. real time data visualizations: map visualizations; integrate with big data / machine learning. (MetricsGraphicsJS; Graphite / Graphene; Cube; Plot.ly; D3JS)

Jesus thought that the adoption of a centralized IoT PaaS would be realized by having a standard set of services/interfaces but multiple implementations, ideally in a hosting-agnostic package. It would be important to be extensible and allow for third party integration support, while providing centralized management and governance. Jesus thought that CloudFoundry might be a good place to build this ecosystem (or at least could serve as a good model for how to do it).

Decentralized IoT PaaS Capabilities

  1. P2P secure messaging: secure, fully encrypted messaging protocol (Telehash)
  2. contract enforcement & messaging trust: express capabilities; enforce actions; maintain a trusted ledger of actions (Blockchain; Ethereum)
  3. file sharing: efficiently sending files to smart devices (firmware update); exchange files in a decentralized model; secure and trusted file exchanges (BitTorrent)

Other capabilities

As with any PaaS system, gathering historical analytics will be important. In addition, there will be a need for device cooperation (machine-to-machine, or "M2M") which gets into agent-based artificial intelligence sorts of systems.

Jesus saw the possibility for several types of companies to bring an IoT PaaS to market:

  • PaaS companies: these are already cloud-based and could provide standalone services for specific capabilities, with a focus on being easy to use and manage
  • API and integration vendors: these would have an experience advantage for integrating APIs with IoT telemetry data. Although they are missing key elements of an IoT platform, they are relatively simple to use and set up.
  • Telecom: (e.g. the Huawei Agile IoT platform). These would have a deep integration with a specific network operator and would be optimized for devices and solutions made available by the operator. However, he thought these would be complex to use and hence might not have a lot of mainstream adoption.
  • Hardware or networking vendors: (e.g. Cisco or F5). These would focus on networking and security and would support integration with a specific network hardware topology.

Orchestrating Containers with Terraform and Consul

This talk was given by Mitchell Hashimoto, CEO and co-founder of HashiCorp. HashiCorp maintains several open source projects in this space (Terraform and Consul are two of them) while also selling related commercial software products. Mitchell defined orchestration as "do some set of actions, to a set of things, in a set order." The particular goal for orchestration in the context of his talk was to safely deliver applications at scale. He noted that containers solve some problems, namely packaging (e.g. Docker Image), image storage (e.g. Docker Registry), and execution (e.g. Docker Daemon), with a sidenote that image distribution might still be an open problem here.

However, containers do not solve other problems that are nonetheless important for application delivery, namely:

  • infrastructure lifecycle and provisioning: the modern datacenter interacts with cloud-based DNS, CDNs, even databases. It needs container hosts, storage, network, and external services. Infrastructure should support container creation (easy), update (hard; even harder to do update with minimal downtime), and destroy lifecycle events. Infrastructure has its own lifecycle events: canary infrastructure changes, rolling deployments, etc.
  • monitoring: needed at multiple levels from node/host level to container level to service level. This information must be able to be not only collected but propagated, as it can have utility for later/downstream orchestration actions.
  • service discovery and configuration: where is service "foo"? How do we provide runtime configuration of a service at the speed of containers, especially in an immutable world? Mitchell suggested that Chef and Puppet don't really have a good injection point for this information when running or launching containers.
  • security/identity: need to provide for service identity for secure service-to-service communication, as well as a way to store and retrieve secrets.
  • deployment and application lifecycle: there is a need to support canary deployments, rolling deployments, blue/green deployments, and others. In an immutable server setting, this requires support for "create before destroy". Users must be able to trigger a deployment and monitor an in-flight deployment.

Even as organizations adopt containers, though, there is still a need to continue to support legacy applications; the transition from non-containers to containers isn't going to be atomic, so orchestration needs to also include support for non-containerized systems. The time period for this transition will probably be years, and what about orchestration in a post-container world someday? Mitchell quoted Gartner and Forrester (~citation needed~) as estimating that the Fortune 500 would be completing their transition to virtualization...in 2015, over a decade after viable enterprise-grade virtualization became available. In other words, orchestration is an old problem; it's not caused by containers. However, the higher density and lifecycle speed of containers reveals and exacerbates orchestration problems. Modern architectures also include new patterns and elements like public cloud, software-as-a-service (SaaS), and generally a growing external footprint. Orchestration problems will continue to exist for the foreseeable future.

Terraform

Terraform solves the infrastructure piece of an overall orchestration solution, providing the ability to build, combine, and launch infrastructure safely and efficiently. As a way of illustrating what problems Terraform solves, Mitchell asked:

  • What if I asked you to create a completely isolated second environment to run an app? (e.g. QA or staging)
  • What if I asked you to deploy or update a complex application?
  • What if I asked you to document how our infrastructure is architected?
  • What if I asked you to delegate some operations to smaller teams (e.g. the distinction between "core IT" and "app IT"). Mitchell noted it is too easy to launch stuff "around" your Tech Ops teams these days, resulting in "shadow ops", so rather than fight it, find a way to achieve it well.

Terraform permits you to create infrastructure with code, including servers, load balancers, databases, email providers, etc., similar to what OpenStack Heat provides. This includes support for SaaS and PaaS resources. With Terraform, a single command is used for both creating and updating infrastructure. It allows you to preview changes to infrastructure and save them as diffs; therefore, code plus diffs can be used to treat infrastructure changes like code changes: make a PR, show diffs, review, accept and merge. Terraform has a module system that allows subdividing your infrastructure to allow teamwork without risking stability; the configuration system allows you to reference the dynamic state of other resources at runtime, e.g. ${digitalocean_droplet.web.ipv4_address}. Its configuration format is human-friendly and JSON compatible; as a text format it is version-control friendly. Since the configuration is declarative, Terraform can be idempotent and highly parallelized; the diff-based mechanism means that Terraform will only do what the plan says it will do, allowing you to examine what it will do ahead of time ("make -n", anyone?) as a clear, human-readable diff.
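Conceptually, this declarative plan/diff model reduces to comparing the desired (declared) state against the current state and emitting create/update/destroy actions. The sketch below is my own illustration of that model, not Terraform's implementation:

```python
def plan(current, desired):
    """Diff current infrastructure state against the desired (declared)
    state, producing an ordered plan of actions."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("destroy", name, current[name]))
    return actions

# Hypothetical resources: resize the web server, add a db, retire a worker.
current = {"web": {"size": "1gb"}, "old_worker": {"size": "512mb"}}
desired = {"web": {"size": "2gb"}, "db": {"size": "4gb"}}
changes = plan(current, desired)
```

Because the plan is computed before anything is executed, it can be reviewed like a code diff; applying the same plan twice is a no-op, which is where the idempotence comes from.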

Consul

Consul is "service discovery, configuration, and orchestration made easy." It is billed as being distributed, highly-available, and datacenter-aware. In a similar fashion to his description of Terraform, Mitchell identified several questions that Consul can answer for you:

  • Where is service foo?
  • What is the health status of service foo?
  • What is the health of machine or node foo?
  • What is the list of all currently running machines?
  • What is the configuration of service foo?
  • Is anyone currently performing operation foo?

Consul offers a service lookup mechanism with both HTTP and DNS-based interfaces. For an example of the latter:

$ dig web-frontend.service.consul. +short
10.0.3.89
10.0.1.46

Consul can work for both internal and external services (the external ones can be manually registered), and incorporates failure detection/health-checking so that DNS won't return non-healthy services or nodes (the HTTP interface offers more detailed information about the overall health state of the managed catalog). Health checks are carried out via local agents; the health check can be an arbitrary shell script. Participating nodes then gossip health information around to each other.
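Conceptually, the DNS interface behaves like a lookup that filters out unhealthy instances before answering, which we might sketch as follows (a toy illustration; the catalog structure here is made up and is not Consul's data model):

```python
def resolve(service, catalog):
    """DNS-style lookup returning only the addresses of healthy instances,
    mimicking Consul's behavior of omitting failing nodes from responses."""
    return [inst["addr"] for inst in catalog.get(service, [])
            if inst["healthy"]]

catalog = {"web-frontend": [
    {"addr": "10.0.3.89", "healthy": True},
    {"addr": "10.0.1.46", "healthy": True},
    {"addr": "10.0.2.17", "healthy": False},  # failed its health check
]}
addrs = resolve("web-frontend", catalog)
```

Only the two healthy addresses come back, matching the shape of the dig output above; the failed node is simply absent from the answer.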


Consul provides key/value storage that can be used for application configuration, and watches can be set on keys (via long poll) to receive notification of changes. Consul also provides for ACLs on keys to protect sensitive information and allow for multi-tenant use. Mitchell suggested that the type of information best suited for Consul should power "the knobs you want to turn at runtime" such as port numbers or feature flags. There is an auxiliary project called consul-template that can regenerate configuration files from templates whenever underlying configuration data changes.
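A toy stand-in for what consul-template does (its real template syntax and watch mechanism differ; the keys and template below are invented for illustration):

```python
def render(template, kv):
    """Re-render a configuration file from key/value data, in the spirit of
    consul-template regenerating config whenever watched keys change."""
    out = template
    for key, value in kv.items():
        out = out.replace("{{" + key + "}}", str(value))
    return out

template = "port = {{service/port}}\nfeature_x = {{flags/feature_x}}"
kv = {"service/port": 8080, "flags/feature_x": "on"}
config = render(template, kv)

# A watch fires when a key changes; regenerate the file (and, in practice,
# signal or reload the service that consumes it).
kv["flags/feature_x"] = "off"
new_config = render(template, kv)
```

This is exactly the "knobs you want to turn at runtime" pattern: flip a flag in the KV store and every subscribed node regenerates its config.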

Consul provides multi-datacenter support as well, although from what I understood, each datacenter is essentially its own keyspace, as the values are set via the strongly consistent Raft protocol, which generally doesn't run well across wide-area networks (WANs). Key lookups are local by default, but the local datacenter's Consul masters can forward requests to other datacenters as needed, so you can still view keys and values from all datacenters within one user interface.

In addition to basic key/value lookup, Consul also supports events that can be published as notifications, as well as execs, which are conceptually a "scalable ssh for-loop" in Mitchell's words. He said there are pros and cons to using each of events, execs, and watches, but that when used in appropriate settings they have found they can scale to thousands of Consul agents.

Camille Fournier: Consensus Systems for the Skeptical Architect

Camille Fournier, CTO of Rent the Runway (RTR), subtitled her talk "Skeptical Paxos/ZAB/Raft: A high-level guide on when to use a consensus system, and when to reconsider". Camille rhetorically asked: if new distributed databases like Riak, Cassandra, or MongoDB don't use a standalone consensus system like ZooKeeper (ZK), etcd, or Consul, are the latter consensus systems any good? She pointed out that the newer distributed databases are often focused on: high availability, where strong consistency is a tradeoff; fast adoption to pursue startup growth (e.g. "don't ask me to install ZooKeeper before installing your distributed database"); and were designed from the ground up as distributed systems by distributed systems experts. She also shared that RTR does not use a standalone consensus system, largely because their business needs and technical environment either don't require or aren't suitable for such a system. In the remainder of the talk, Camille shared some evaluation criteria that teams and organizations can use when trying to decide if systems like ZK et al. are a good fit.

First, we should evaluate where the system would run. If the environment does not require operational support for rapid growth and rapid change, then a standalone consensus system may be overkill. Consensus systems are often used to provide distributed locks or service orchestration for large distributed systems, but in Camille's words, "you don't always have an orchestra; sometimes you have a one-man band." Simpler alternatives to distributed service orchestration include load balancers or DNS; locks can be provided by databases (just use a transactional relational database or something like DynamoDB that supports strongly consistent operations).

Second, we should consider what primitives are needed for our application. Consensus systems provide strong consistency, and several have support for ephemeral nodes (which disappear when a client session disconnects) and notifications or watches. Different consensus systems provide these to different degrees, and later in the talk Camille summarized some of these tradeoffs, which I captured below. Perhaps her strongest point in this section is that consensus systems are not really a key/value store per se; they are designed to point to data, not to contain it. You can use them for limited pub-sub operations, but...you can also fix things with duct tape and baling wire.

Camille also provided some analysis of the similarities and differences between ZK and etcd to illustrate some of the subtleties involved in choosing one or the other. Both use a proven consensus algorithm (ZAB for ZooKeeper, Raft for etcd) to provide consistent state. With ZooKeeper, clients maintain a stateful connection to the cluster, which ensures a single system image per client. While this can be powerful, it can be hard to do right--the ZooKeeper client state machine is complicated, and Camille recommended using the Curator library for Java instead of writing your own. etcd, by contrast, has an HTTP-based interface, which is easy to implement and does not require complex session management. However, you must pay the overhead of the HTTP protocol; if you use temporary time-to-live nodes, you have to implement heartbeats/failure detection in your clients; and achieving a "single system image" requires more work. Watches differ as well: ZK watches do not guarantee that you will see the intermediate states of a watched node that undergoes multiple changes, whereas etcd watches are provided via long poll and can also show the change history within a certain timeframe.

Camille then wrapped up with a number of common challenges that can be faced when deploying consensus systems.

Odd numbers rule
Use 3 or 5 cluster members; there's no value in having four, as you still need 3 available to form a majority. That requires more servers to be up than with 3 members while tolerating fewer failures than with 5--the worst of both worlds.
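The majority arithmetic behind this rule is easy to sketch (this is generic quorum math, not code from the talk):

```javascript
// Quorum size and failure tolerance for a consensus cluster of n members.
function quorum(n) {
  return Math.floor(n / 2) + 1;  // votes needed for a majority
}

function faultTolerance(n) {
  return n - quorum(n);          // members that can fail while a majority remains
}

console.log(quorum(3), faultTolerance(3)); // 2 1
console.log(quorum(4), faultTolerance(4)); // 3 1  <- same tolerance as 3 members, more servers required
console.log(quorum(5), faultTolerance(5)); // 3 2
```

An even-sized cluster always has the same fault tolerance as the odd-sized cluster one smaller, so the extra member buys nothing.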
Clients can run amok
Camille also phrased this as "multi-tenancy is hard" and "hell is other developers." She suggested potentially not sharing the same consensus system deployment to guarantee resource isolation, doing lots of code review, and providing wrapper libraries for clients to ensure good client behavior.
Garbage collection (GC) and network burps
This is a warning about lock assumptions. Many distributed locks are advisory and are based on the concept of temporary leases that must be successfully renewed by the holder or the lock gets released. In some cases, a GC pause or network partition can exceed the lock lease timeout, which can result in two systems thinking they both hold the lock. Dealing successfully with this challenge requires validating lock requests at the protected resource in order to detect out-of-order use. Despite strong consistency, the realities of the physical world mean that ZK et al. can only provide advisory locking at best.
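One common shape for that validation (my illustration, not something from the talk) is the "fencing token" idea: the lock service hands out a monotonically increasing token with each lease, and the protected resource rejects requests carrying a token older than the newest one it has seen. A minimal sketch:

```javascript
// Sketch of fencing-token validation at the protected resource.
// Assumes a hypothetical lock service that issues a strictly increasing
// token with each granted lease.
class ProtectedResource {
  constructor() {
    this.highestToken = -1;
  }

  // Reject writes with a stale token: the writer's lease expired (e.g. during
  // a GC pause) and another client has since acquired the lock.
  write(token, value) {
    if (token < this.highestToken) {
      throw new Error(`stale lock token ${token}; already saw ${this.highestToken}`);
    }
    this.highestToken = token;
    this.value = value;
    return true;
  }
}

const resource = new ProtectedResource();
resource.write(33, "from client A");  // client A holds the lock with token 33
resource.write(34, "from client B");  // A's lease expired; B acquired token 34
// If client A now wakes from its GC pause still believing it holds the lock,
// resource.write(33, "late write") throws instead of corrupting state.
```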
Look for the blood spatter pattern
Bugs will be in the features no one uses or the things that happen rarely. Camille shared a story about the first time she tried to use ACLs and authentication in ZooKeeper--a documented feature--and found they didn't work at all, because no one actually used them!
Consensus Owns Your Availability
Per the CAP theorem, there can be an availability tradeoff for consensus systems during partitions, even if the individual cluster members are up. If you make your application's availability dependent on the consensus system's availability, consensus can become a single point of (distributed!) failure.

Saturday, February 7, 2015
Does jQuery expose a monadic interface?

Pam Selle raised an interesting question on her blog recently: does jQuery expose a monadic interface? I'll interpret this as specifically asking: are jQuery containers monads? This was such an interesting question that I decided to investigate it more thoroughly; one can never have too much practice figuring out how monads work, in my experience!

There are a couple of formulations of monads; both have in common that there is a container type M that "wraps" plain data values in some fashion. Here, M is the type of jQuery containers, and the plain data values are DOM elements. Some monads are polymorphic, in that they can wrap values of a wide variety of underlying data types (lists and Option/Maybe are good examples of these). jQuery containers are also polymorphic, although we usually just apply them to DOM elements. For the rest of this post, we'll refer to C as the type of a jQuery container, and when it contains items of type T then we'll write C[T].

Monadic operators


Now, in order to qualify as a monad, a type has to support certain operations with particular signatures, and then those operations have to obey certain "monadic laws" in order to be correctly composable.

Both formulations share a requirement for a simple wrapping function, called return or unit, that takes plain data values (DOM elements) and returns values of the monadic type (jQuery containers, or C). In other words, we're looking for an operation / "constructor" of type:

unit : T → C[T]

If e is a DOM element, then $(e) returns a jQuery container with that one element in it; ditto for $(a) if a is an array of DOM elements. We can then understand all the other selectors as syntactic sugar for one or the other of these. For example, $("#foo") is equivalent to $(document.getElementById("foo")). In fact, we can see that even just $(elt) for a single element is equivalent to $([elt]) (i.e. wrapping the element in a single element array). This will be convenient for the rest of this post, since it means we just have to deal with this simplest constructor. At any rate, it seems like we have the unit operation covered.

Now, in one of the formal definitions of monad, the other operation required is one called bind, which takes a value of the monadic type and a function from the underlying data type to another value of the monadic type, and returns a value of the monadic type again. That's a mouthful, so it might be helpful to look at the type of this operator:

bind : C[T] → (T → C[U]) → C[U]

Conceptually, this takes every contained item of type T, applies a transformation to it that maps to a new set of containers, and then "squashes" it down into one container again. As is common with object-oriented language implementations, the this variable can be thought of as an implicitly-passed parameter, so we can look through the jQuery container API for a method that takes one of these transformation callbacks and returns a new jQuery container. One such candidate is the .map() method, which is defined as:
Pass each element in the current matched set through a function, producing a new jQuery object containing the return values.
Wow, this looks pretty good; it takes a transformation function and returns us a jQuery container at the end. The real question is whether it will "flatten" things down into a single jQuery container for us (since the documentation doesn't say) if our function returns jQuery containers. Suppose we have the following markup embedded in a page:
<div id="example">
  <div class="outer"><div id="one" class="inner"></div></div>
  <div class="outer"><div id="two" class="inner"></div></div>
</div>
Now, suppose we try this in the Javascript console:
> $(".outer").map(function (idx,elt) { return $(elt).children(); });
We can see that this does, in fact, squash down to just a container with the two div.inner elements in it, as opposed to a container containing two containers. Nice!
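That flattening behavior is the essence of bind. Outside the browser you can model it with plain arrays, where Array.prototype.flatMap plays the same role (the arrays and string values here are stand-ins for jQuery containers and DOM elements, not real jQuery):

```javascript
// Model: arrays stand in for jQuery containers, strings for DOM elements.
const unit = x => [x];                // like $(elt): wrap one value in a container
const bind = (m, f) => m.flatMap(f);  // like $(...).map(f): transform, then flatten

// Each "outer" value maps to a container of "inner" values...
const outer = ["outer1", "outer2"];
const children = x => [x + "-child"];

// ...and bind squashes the results into a single flat container, just as
// .map() gave us one container of div.inner elements above.
bind(outer, children);  // ["outer1-child", "outer2-child"]
```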

Monad Laws


We have our candidate operations for unit and bind, so we have to check whether they adhere to certain properties; these properties are similar in spirit to the commutative or associative properties of addition (or, perhaps better, the distributive law of multiplication over addition, since that expresses a relationship between two operations). For the below notation, the bind operator is written as >>=. The first law says:

(unit x) >>= f   ≅   f x

Or, to put things into jQuery syntax: $(x).map(f) should be the same as f(x). If f() can take a DOM element and return a jQuery container, then we can see from our test above that we're in good shape here. The second law says:

m >>= return   ≅   m

which is to say that if m is a jQuery container, then it should be the case that m.map(function (idx,item) { return $(item); }) gives us m back again, which also seems right. Finally, the third law says:

(m >>= f) >>= g   ≅   m >>= ( \x→ (f x >>= g))

Or, that if m is a jQuery container, and f and g are transformer functions, that the following are equivalent:

m.map(f).map(g)
m.map(function (idx,x) { return f(x).map(g); })

Ok, that's a mouthful--or at least a keyboardful. What we are doing in the first line is taking each element in m and applying f to it, squashing the results into one container, then taking each element of that collection, applying g to it, and flattening everything down into one collection. In the second line, we are taking each element in m and applying the given function to it, but we can see that the body of the function does the same thing: namely, applying f to the element (which returns a container, remember), and then mapping g across all those elements. Because .map() does the squashing for us, we end up with equivalent containers at the end.
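All three laws are easy to spot-check mechanically with the same array model (again, a stand-in; checking the jQuery versions directly needs a live DOM):

```javascript
// Array model of the jQuery container monad.
const unit = x => [x];
const bind = (m, f) => m.flatMap(f);
const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b);

// Arbitrary transformer functions and test values.
const f = x => [x * 2];
const g = x => [x + 1];
const m = [1, 2, 3];
const x = 5;

// Law 1 (left identity): (unit x) >>= f  ≅  f x
console.log(eq(bind(unit(x), f), f(x)));                            // true

// Law 2 (right identity): m >>= unit  ≅  m
console.log(eq(bind(m, unit), m));                                  // true

// Law 3 (associativity): (m >>= f) >>= g  ≅  m >>= (\x → (f x >>= g))
console.log(eq(bind(bind(m, f), g), bind(m, y => bind(f(y), g))));  // true
```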

And that's it! It does look like jQuery containers are a monad after all. We can actually understand several methods of jQuery containers as convenient applications of .map(). For example, .children() is really equivalent to:

.map(function (idx,elt) {
       return $(elt.childNodes);
     });

[I suspect that the actual implementation of .children() ends up being more efficient, as it doesn't have to construct all the intervening jQuery containers and then squash their arrays back together again.]

Exercise for the reader: Can you find an expression of jQuery containers as a monad in their other formulation, with return, fmap, and join? (I ran out of time to try this before I had to make dinner for my kids!)

The Upshot


"So what?" you might ask. Well, it suggests that if you are writing a library meant to be used with jQuery, you might be well served to write many of your utility functions in the form of the monadic transform functions we saw above, taking DOM elements and returning jQuery containers--i.e., making sure your utility functions can be passed as arguments to .map()--because it means they will be composable in very flexible ways that let them be chained together easily.

Saturday, January 17, 2015
Installing Ubuntu on an old Macbook

[Update: This post is turning into a lab notebook more than anything, and I'm mainly just recording it here as someplace convenient, with the idea that someone *might* find it useful someday.]

I have a 2008-vintage Macbook for which OS X no longer seems to be a good option: I managed to get it upgraded to OS X 10.7 (which was not straightforward), but I think the hardware is a little underpowered at this point. In fact, upgrades to later versions of OS X are not even supported.

Since I don't otherwise have a personal laptop and occasionally need something for writing/email/lightweight development, I figured I'd give Linux a shot, having had previous experience of getting extended service lifetime out of old hardware. Plus it had been a while since I'd monkeyed around with installing Linux; I was curious how hard or easy this was (bearing in mind that I was highly confident it would be easier than installing Slackware off a stack of 3.5" floppies). I gave Ubuntu a shot.

Long story short: I managed to get it working (mostly), but it required a lot of trial-and-error (and multiple reinstall-from-scratch passes). Since this was hardware I was going to retire and I was working on it in my spare time [with 4 kids, this stretched out over weeks!], I didn't mind, but it wasn't entirely straightforward. The documentation was relatively good, although somewhat contradictory, likely because it came from several sources that evolved asynchronously.

Some things I encountered:

  • Manually configuring my partitions with gparted didn't work the first go-round, I think due to insufficient guidance and my inability to remember (know?) the limitations around needing to boot from primary partitions, the maximum number of primary partitions allowed, etc. When I freed up the space, left it unallocated, and let the Ubuntu installer do whatever it wanted, it worked.
  • The latest supported Ubuntu release for my hardware according to the community docs was 13.04 (Raring Ringtail); at the time of this experiment, the current release was 14.10 (Utopic Unicorn), so I was already three releases behind the release train. Although I was able to install 13.04, even an upgrade to 13.10, the next release, no longer worked with my hardware (kernel panics on boot). Raring is already EOL, too: the latest updates to the mirrors were over a year ago at the time of this writing, even for security updates. Caught between retiring perfectly serviceable hardware and not having security updates is not a fun place to be. The reality is that I could buy a replacement laptop if I really needed one, but I suspect there are lots of folks with hand-me-down hardware and/or insufficient means to buy newer that would just be stuck. At least I can nmap myself, shut down all my ports, and hunker down. If push comes to shove I used to know how to build kernel images from source, but again, that's definitely not for everyone. Tough problem: I know forward progress sometimes requires retiring support for old hardware, but there's a long tail out there.
  • Because my Ubuntu release was EOL, I had to manually configure my apt sources.list file, through a little bit of trial and error.
  • Got Chrome installed and hooked into updates, but could not get the Dropbox client to survive long enough to sign in. My guess is that my install is so ancient the single .deb release they offer isn't really tested against the library ecosystem I have installed.
  • Once I had my sources.list updated, I was able to install missing software and even apply updates. I initially tried just applying the security-related updates, but this introduced some instability/random crashes of stuff, until I went ahead and applied *all* the available updates. So far so good after that <crosses fingers>.
  • Ended up buying a replacement battery off eBay, around $20, which I thought was a reasonable investment for a few more years' (hopefully) service lifetime. I'll probably buy an el cheapo USB mouse with multiple buttons, because the trackpad has gotten desensitized at this point and clicking is kinda hard--and pounding on the trackpad button, while satisfying when in a bad mood, gets old fast. I still haven't researched the maximum memory the motherboard supports to see whether a memory upgrade is possible; currently I have a paltry 2GB in it, which it seems to be ok with for the moment. But I think it's probably a good bang-for-the-buck investment to keep in mind.
Ultimately, a fun project, and the thrifty part of me likes getting some extended use out of the laptop. It appears to be able to run Minecraft reasonably well (and better than the OS X installation on the same hardware!), which will make it a big hit with the kids, and it's handy being able to have something serviceable for lightweight use when I'm on a trip where using a work laptop wouldn't be appropriate.

Notes for myself for later/lab notebook

(I won't rule out the possibility of needing to reinstall from scratch at some point, but I'll probably have forgotten all this stuff by then!)
18 January 2015: still having kernel panics on resume/wake up. This bug seems to blame the wireless driver, which means I am going to need a new kernel somehow. Since Raring Ringtail is EOL, this either means building my own kernel, possibly with a patched source (ugh), or something else. I'm somewhat tempted to attempt an install of the most recent LTS Ubuntu release just to see if it will work with this hardware or not; since I don't have anything important on the laptop anymore, blowing away the Linux install is still a viable option. This also suggests I should really be thinking of using this mostly as a netbook, with the hard disk serving as temporary storage until I can push things out to the network.

Went with 14.04.1 LTS (Trusty Tahr): so far this seems to be stable across sleep/restart. Default trackpad settings are off, though; they required pushing too hard on the trackpad. Found the settings to fix this:

$ xinput set-prop appletouch "Synaptics Finger" 10 50 0

Still need to work out how to make that trackpad setting persist across restarts. Apparently adding the above command as /etc/X11/Xsession.d/85custom_fix-trackpad may do that. Yup!

Success! Seems like everything works ok now: suspend/resume, trackpad, wireless, even Chrome and Dropbox. So, perhaps I misinterpreted the Ubuntu-on-Macbook support matrix? Possible, but the columns are labelled "most recent release" and "most recent LTS release", which led me to think those were the maximum versions that would work. Not so, apparently. But anyway, seem to have a working, semi-stable laptop now (although I discovered that Firefox could trigger a kernel crash on certain pages; not really a big deal since I plan to use Chrome mostly).