Sunday, 28 February 2016

Apache Thrift ™

Ruby Tutorial


Introduction

All Apache Thrift tutorials require that you have:
  1. Built and installed the Apache Thrift Compiler and Libraries, see Building from source for more details.
  2. Generated the tutorial.thrift and shared.thrift files as discussed here.
    thrift -r --gen rb tutorial.thrift
    
  3. Followed all prerequisites listed below.

Prerequisites

Client

$:.push('gen-rb')
$:.unshift '../../lib/rb/lib'

require 'thrift'

require 'calculator'

begin
  port = ARGV[0] || 9090

  transport = Thrift::BufferedTransport.new(Thrift::Socket.new('localhost', port))
  protocol = Thrift::BinaryProtocol.new(transport)
  client = Calculator::Client.new(protocol)

  transport.open()

  client.ping()
  print "ping()\n"

  sum = client.add(1,1)
  print "1+1=", sum, "\n"

  sum = client.add(1,4)
  print "1+4=", sum, "\n"

  work = Work.new()

  work.op = Operation::SUBTRACT
  work.num1 = 15
  work.num2 = 10
  diff = client.calculate(1, work)
  print "15-10=", diff, "\n"

  log = client.getStruct(1)
  print "Log: ", log.value, "\n"

  begin
    work.op = Operation::DIVIDE
    work.num1 = 1
    work.num2 = 0
    quot = client.calculate(1, work)
    puts "Whoa, we can divide by 0 now?"
  rescue InvalidOperation => io
    print "InvalidOperation: ", io.why, "\n"
  end

  client.zip()
  print "zip\n"

  transport.close()

rescue Thrift::Exception => tx
  print 'Thrift::Exception: ', tx.message, "\n"
end
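
As a small aside, here is a minimal sketch of the same client trimmed down to a single call, assuming the same generated gen-rb code and a server listening on localhost:9090; the ensure block guarantees the socket is closed even if a call raises:

$:.push('gen-rb')
require 'thrift'
require 'calculator'

transport = Thrift::BufferedTransport.new(Thrift::Socket.new('localhost', 9090))
client    = Calculator::Client.new(Thrift::BinaryProtocol.new(transport))

begin
  transport.open
  puts "2+3=#{client.add(2, 3)}"   # one RPC round trip
ensure
  transport.close                  # always release the socket
end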

Server

$:.push('gen-rb')
$:.unshift '../../lib/rb/lib'

require 'thrift'

require 'calculator'
require 'shared_types'

class CalculatorHandler
  def initialize()
    @log = {}
  end

  def ping()
    puts "ping()"
  end

  def add(n1, n2)
    print "add(", n1, ",", n2, ")\n"
    return n1 + n2
  end

  def calculate(logid, work)
    print "calculate(", logid, ", {", work.op, ",", work.num1, ",", work.num2,"})\n"
    if work.op == Operation::ADD
      val = work.num1 + work.num2
    elsif work.op == Operation::SUBTRACT
      val = work.num1 - work.num2
    elsif work.op == Operation::MULTIPLY
      val = work.num1 * work.num2
    elsif work.op == Operation::DIVIDE
      if work.num2 == 0
        x = InvalidOperation.new()
        x.whatOp = work.op
        x.why = "Cannot divide by 0"
        raise x
      end
      val = work.num1 / work.num2
    else
      x = InvalidOperation.new()
      x.whatOp = work.op
      x.why = "Invalid operation"
      raise x
    end

    entry = SharedStruct.new()
    entry.key = logid
    entry.value = "#{val}"
    @log[logid] = entry

    return val

  end

  def getStruct(key)
    print "getStruct(", key, ")\n"
    return @log[key]
  end

  def zip()
    print "zip\n"
  end

end

handler = CalculatorHandler.new()
processor = Calculator::Processor.new(handler)
transport = Thrift::ServerSocket.new(9090)
transportFactory = Thrift::BufferedTransportFactory.new()
server = Thrift::SimpleServer.new(processor, transport, transportFactory)

puts "Starting the server..."
server.serve()
puts "done."

Tuesday, 23 February 2016

RSpec formatters

# Add multiple formatters to the RSpec configuration
RSpec.configure do |c|
  c.add_formatter(:documentation)
  c.add_formatter(:json)
  c.add_formatter(:progress)
end

json_formatter = RSpec::Core::Formatters::JsonFormatter.new($stdout)
{"version":"3.4.1","messages":["Run options:\n include {:focus=>true}\n exclude {:slow=>true}"],"examples":[{"description":"should be 200 OK","full_description":"Easybring::APIv1 GET businesses should be 200 OK","status":"passed","file_path":"./spec/endpoints/businesses_spec.rb","line_number":20,"run_time":2.603616,"pending_message":null},{"description":"should be an Array","full_description":"Easybring::APIv1 GET businesses should be an Array","status":"passed","file_path":"./spec/endpoints/businesses_spec.rb","line_number":24,"run_time":0.130511,"pending_message":null}],"summary":{"duration":3.407043,"example_count":2,"failure_count":0,"pending_count":0},"summary_line":"2 examples, 0 failures"}

TODO: Drone is a Continuous Delivery platform built on Docker, written in Go


https://github.com/drone/drone

SQLite does not have a specific datetime type.

if your spec tests fail

Thursday, 18 February 2016

Redis Cheat Sheet

source: http://lzone.de/cheat-sheet/Redis


Redis Cheat Sheet

When you encounter a Redis instance and you quickly want to learn about the setup, you just need a few simple commands to peek into it. Of course it doesn't hurt to look at the official full command documentation, but below is a listing just for sysadmins.

Accessing Redis

CLI

First thing to know is that you can use "telnet" (usually on the default Redis port 6379)
telnet localhost 6379
or the Redis CLI client
redis-cli
to connect to Redis. The advantage of redis-cli is that you have a help interface and command line history.
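
Since most of the code on this blog is Ruby, here is a minimal sketch of talking to the same instance from code, assuming the redis gem is installed and Redis runs on the default port 6379:

require 'redis'

redis = Redis.new(host: 'localhost', port: 6379)
redis.set('greeting', 'hello')
puts redis.get('greeting')   # => "hello"
puts redis.ping              # => "PONG"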

CLI Queries

Here is a short list of some basic data extraction commands:
Tracing
monitor   # Watch current live commands. Use this with care on production. Cancel with Ctrl-C.

Slow Queries
slowlog get 25   # print top 25 slow queries
slowlog len
slowlog reset

Search Keys
keys pattern     # Find key matching exactly
keys pattern*    # Find keys matching in front
keys *pattern*   # Find keys matching somewhere
keys *pattern    # Find keys matching in back
On production servers use "KEYS" with care as it causes a full scan of all keys!

Generic
del <key>
dump <key>   # Serialize key
exists <key>
expire <key> <seconds>

Scalars
get <key>
set <key> <value>
setnx <key> <value>   # Set key value only if key does not exist
Batch commands:
mget <key> <key> ...
mset <key> <value> <key> <value> ...
Counter commands:
incr <key>
decr <key>

Lists
lrange <key> <start> <stop>
lrange mylist 0 -1   # Get all of a list
lindex mylist 5      # Get by index
llen mylist          # Get length

lpush mylist "value"
lpush mylist 5
rpush mylist "value"

lpushx mylist 6   # Only push if mylist exists
rpushx mylist 0

lpop mylist
rpop mylist

lrem mylist 1 "value"   # Remove 'value' count times
lset mylist 2 6         # mylist[2] = 6
ltrim <key> <start> <stop>

Hashes
hexists myhash field1   # Check if hash field exists

hget myhash field1
hdel myhash field2
hset myhash field1 "value"
hsetnx myhash field1 "value"

hgetall myhash
hkeys myhash
hlen myhash
Batch commands:
hmget <key> <field> <field> ...
hmset <key> <field> <value> <field> <value> ...
Counter commands:
hincrby myhash field1 1
hincrby myhash field1 5
hincrby myhash field1 -1

hincrbyfloat myhash field2 1.123445

Sets: FIXME
Sorted Sets: FIXME

CLI Scripting

For scripting just pass commands to "redis-cli". For example:
$ redis-cli INFO | grep connected
connected_clients:2
connected_slaves:0
$

Server Statistics

The statistics command is "INFO" and will give you output like the following.
$ redis-cli INFO
redis_version:2.2.12
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
process_id:8353
uptime_in_seconds:2592232
uptime_in_days:30
lru_clock:809325
used_cpu_sys:199.20
used_cpu_user:309.26
used_cpu_sys_children:12.04
used_cpu_user_children:1.47
connected_clients:2   # <---- connection count
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:6596112
used_memory_human:6.29M   # <---- memory usage
used_memory_rss:17571840
mem_fragmentation_ratio:2.66
use_tcmalloc:0
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1371241671
bgrewriteaof_in_progress:0
total_connections_received:118
total_commands_processed:1091
expired_keys:441
evicted_keys:0
keyspace_hits:6
keyspace_misses:1070
hash_max_zipmap_entries:512
hash_max_zipmap_value:64
pubsub_channels:0
pubsub_patterns:0
vm_enabled:0
role:master    # <---- master/slave in replication setup
db0:keys=91,expires=88
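
The same statistics are easy to read from Ruby; a small sketch assuming the redis gem, where info returns the fields above as a string-keyed hash:

require 'redis'

info = Redis.new.info
puts "role:              #{info['role']}"
puts "connected_clients: #{info['connected_clients']}"
puts "used_memory_human: #{info['used_memory_human']}"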

Changing Runtime Configuration

The command
CONFIG GET *
gives you a list of all active configuration variables you can change. The output might look like this:
redis 127.0.0.1:6379> CONFIG GET *
 1) "dir"
 2) "/var/lib/redis"
 3) "dbfilename"
 4) "dump.rdb"
 5) "requirepass"
 6) (nil)
 7) "masterauth"
 8) (nil)
 9) "maxmemory"
10) "0"
11) "maxmemory-policy"
12) "volatile-lru"
13) "maxmemory-samples"
14) "3"
15) "timeout"
16) "300"
17) "appendonly"
18) "no"
19) "no-appendfsync-on-rewrite"
20) "no"
21) "appendfsync"
22) "everysec"    # <---- how often fsync() is called
23) "save"
24) "900 1 300 10 60 10000"  # <---- how often Redis dumps in background
25) "slave-serve-stale-data"
26) "yes"
27) "hash-max-zipmap-entries"
28) "512"
29) "hash-max-zipmap-value"
30) "64"
31) "list-max-ziplist-entries"
32) "512"
33) "list-max-ziplist-value"
34) "64"
35) "set-max-intset-entries"
36) "512"
37) "slowlog-log-slower-than"
38) "10000"
39) "slowlog-max-len"
40) "64"
Note that keys and values are alternating and you can change each key by issuing a "CONFIG SET" command like:
CONFIG SET timeout 900
Such a change will be effective instantly. When changing values consider also updating the redis configuration file.

Databases

Multiple Databases

Redis has a concept of separated namespaces called "databases". You can select the database number you want to use with "SELECT". By default the database with index 0 is used. So issuing
redis 127.0.0.1:6379> SELECT 1
OK
redis 127.0.0.1:6379[1]>
switches to the second database. Note how the prompt changed and now has a "[1]" to indicate the database selection. To find out how many databases there are you might want to run redis-cli from the shell:
$ redis-cli INFO | grep ^db
db0:keys=91,expires=88
db1:keys=1,expires=0

Dropping Databases

To drop the currently selected database run
FLUSHDB
to drop all databases at once run
FLUSHALL

Replication

Checking for Replication

To see if the instance is a replication slave or master issue
redis 127.0.0.1:6379> INFO
[...]
role:master
and watch for the "role" line which shows either "master" or "slave". Starting with version 2.8 the "INFO" command also gives you per slave replication status looking like this
slave0:ip=127.0.0.1,port=6380,state=online,offset=281,lag=0

Setting up Replication

If you quickly need to set up replication just issue
SLAVEOF <IP> <port>
on a machine that you want to become slave of the given IP. It will immediately get values from the master. Note that this instance will still be writable. If you want it to be read-only change the redis config file (only available in most recent version, e.g. not on Debian). To revert the slave setting run
SLAVEOF NO ONE


Performance Testing

Benchmark

Install the Redis tools and run the provided benchmarking tool
redis-benchmark -h <host> [-p <port>]
If you are migrating from/to memcached protocol check out how to run the same benchmark for any key value store with memcached protocol.

Debugging Latency

First measure system latency on your Redis server with
redis-cli --intrinsic-latency 100
and then sample from your Redis clients with
redis-cli --latency -h <host> -p <port>
If you have problems with high latency check if transparent huge pages are disabled. Disable it with
echo never > /sys/kernel/mm/transparent_hugepage/enabled

Dump Database Backup

As Redis allows RDB database dumps in background, you can issue a dump at any time. Just run:
BGSAVE
When running this command Redis will fork and the new process will dump into the "dbfilename" configured in the Redis configuration without the original process being blocked. Of course the fork itself might cause an interruption. Use "LASTSAVE" to check when the dump file was last updated. For a simple backup solution just backup the dump file. If you need a synchronous save run "SAVE" instead of "BGSAVE".

Listing Connections

Starting with version 2.4 you can list connections with
CLIENT LIST
and you can terminate connections with
CLIENT KILL <IP>:<port>

Monitoring Traffic

Probably the most useful command (compared to memcached, where you need to trace network traffic) is the "MONITOR" command, which will dump incoming commands in real time.
redis 127.0.0.1:6379> MONITOR
OK
1371241093.375324 "monitor"
1371241109.735725 "keys" "*"
1371241152.344504 "set" "testkey" "1"
1371241165.169184 "get" "testkey"
Additionally, use "SLOWLOG" to track the slowest queries in an interval. For example
SLOWLOG RESET
# wait for some time
SLOWLOG GET 25
and get the 25 slowest commands during this time.

Friday, 12 February 2016

make sense?

Victor Nava: You know, a few months ago I was working on a program to solve permutations and combinations.

This is a very interesting problem because you can think of many things in terms of combinations or sequences. Put the right thing in the right place and it makes something. Music is a sequence of notes, books are sequences of letters, a painting is a sequence of materials.

This somehow got me thinking about infinity. What is infinity? Is the universe infinite? Is anything infinite? Is infinity possible?

Then I stopped thinking about that.

A few months later I was working on a different program that generated drawings. A sequence of pixels on the screen.

Then I got thinking about combinations and infinity again. You know, when you are not working you have time to think about silly things.

I asked myself: how is it possible that a program can make a drawing? Well, that's easy; it is possible because I am telling the program what to do. Cool. Then I thought: what if the program could make drawings with random parameters until it makes one that I like? Hmm, that sounds possible.

Then I thought, why don't I just make a really dumb program that generates ALL of the possible drawings I can fit on my screen? Shit, that would take a long time, but just for fun: how many drawings can in theory be drawn on my computer screen?

My initial answer was: an infinite number.

That can't be right: if the number of pixels on my screen is finite and the number of colors that each pixel can be in at any given time is finite, how is it possible to draw an infinite number of pictures?

Let's see, my screen has a resolution of 1200 by 900 pixels. That means there are in total about 1 million (1200 * 900 = 1,080,000) pixels. Each pixel can be lit in 16,777,216 different colors.

So how many drawings can be drawn on my screen?

The answer is simple and it comes from combinations.

An image is nothing more than a sequence of numbers that represent pixels.
Each pixel can be of a particular color. That's it: get the right pixels lit up with the right colors and you get a picture.

To calculate the number of possible pictures that can be shown on a screen, you take the number of colors each pixel can display and raise it to the power of the number of pixels, since every pixel chooses its color independently of the others.

The result is a very, very, very big number. However, it is a finite number, which means you can only put so many different pictures on a particular screen.
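
Just to make the size concrete, a quick back-of-the-envelope sketch in Ruby, using the resolution and color depth assumed above:

pixels = 1200 * 900   # 1,080,000 pixels
colors = 2 ** 24      # 16,777,216 colors per pixel
# Every pixel picks its color independently, so the total is colors ** pixels.
# The number itself is far too large to print, but we can count its digits.
digits = (pixels * Math.log10(colors)).ceil
puts "colors ** pixels has about #{digits} digits"   # roughly 7.8 million digits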

Hold on, there are so many things I can draw on my screen.

Realising this made me a bit sad.

I started thinking about creativity and imagination and soon realised that these terms, as romantic as they sound, obey the same rules. There are only so many things we can think of, because the number of cells in our brains is finite and thoughts come from our brain, from the combination of signals from neurons firing messages to each other.

So in theory, you could simulate every single cell in a brain, and run a program that calculates every possible combination of connections, and messages. The same as with a computer screen.

This sounds impossible, and it is at the moment.

To generate all of the possible pictures on my screen would take my computer an enormous amount of time. But if you give it enough time it will eventually get there. Now, this is using current technology and dumb algorithms.

The other interesting question is, how many pictures that make sense can be shown on my screen?

Again the number is finite, because this is a subset of all the possible pictures. So in theory it should be easier to find the answer, right?

No. Doing this is actually really hard, because to do it you need to tell the computer what a picture that makes sense looks like. And we don't know how to describe that in words. We can tell what makes sense but we can't figure out the formula.

However, you can teach a computer how to find things that make sense by training it, pretty much the same way you teach humans: by example. You show it a bunch of pictures that make sense and a bunch that don't, and let it know which is which. If you give it enough examples it can "learn" to recognise the pattern. This is already possible; the problem is how to find the right data.

But in theory you can take a computer program and feed it every single web page on the internet, every sms, facebook and whatsapp message, plus all the books ever written, and calculate the right combinations of letters to form words, sentences and stories that make sense. Among the stories that make sense there will be many that are superior to any other ever written.

Pause here for a moment. These programs are not taught how to write; they are taught how to recognise patterns that humans like and emulate them. To the computers these patterns are just numbers. Not pixels or letters, just numbers.

There is a finite number of letters in the alphabet and a finite number of letters you can fit on 400 pages of paper.

A book is a combination of sentences.
A sentence is a combination of words.
A word is a combination of letters.
All finite. And predictable.

Where the idea to combine letters in a way that makes sense comes from is a mystery.

But when you can see all possibilities you don't need to imagine. You just need to select.

Among all of the possible books that can be written lie the works of Dostoevsky. Computers will eventually generate all of them, plus the ones he didn't have time to write.

They will be able to imitate the style of writers. And combine styles from different people to form new styles. And we won't be able to tell the difference. Or maybe we will, we'll say things like: "this book is too good, it must have been generated by some stupid program".

to be continued...





Me:
Where the idea to combine letters in a way that makes sense comes from is a mystery.

sense: A sense is a physiological capacity of organisms that provides data for perception. The senses and their operation, classification, and theory are overlapping topics studied by a variety of fields, most notably neuroscience, cognitive psychology (or cognitive science), and philosophy of perception

So: if we can map out what makes sense to us and formalize it somehow so the computer can understand, then we can solve this mystery.

But how can we do this?

Sense: to me it's just a sequence of electrical transitions happening in our mind.
But what formula can we use to capture when "something makes sense" to us so that computers can understand it?

Another thought:
Is this the right approach? Or do we have to tell the computer the facts about the elements around us and what it is that we want, so that a situation will make sense? What I mean is:

Is it the "make sense" a process based on existing defined truths and the results that we want?

results = :foo

defined_truths = [4,2,3,5]

the "make sense" process is:

defined_truths.include? results

the actually "include?" method when its happening do we do the same with our minds?

So: is it the "make sense" a sequence of examples that they can work? and it can aply to all the stuff in life?

results = I wanna fly
defined_truths = [gravity, weight]

the "make sense" check:

If I wanna fly, gravity should be less than x.
Can I do this? Yes, OK, it makes sense.

So for me it's an array of examples that can be true to nature's laws and the laws we define :)

And again something else comes up:

Who can define laws, and why, and how? Based again on an array of valid and accepted examples?

Big story and very interesting :D




I was walking just now and thinking that we have to DEFINE everything to the computer so it's common sense to them too: what each Object is and what their attributes are, so we can compare, select, etc.




# Define Music:
# Vocal or instrumental sounds (or both) combined in such a way as to produce beauty of form, harmony, and expression of emotion.
class Music
  has_many :sounds
  has_many :notes, through: :sounds

  sequences_of_emotions = []

  def good(user)
    sequences_of_emotions.select { |e| e == user.definition_of(Music, :good) }
  end
end

class Sound
  has_many :notes
end

class Note
  has_many :sequences
end

class Vocal < Sound
end

class User
  DEFINITIONS = {
    music: {
      good: []
    }
  }

  def definition_of(klass, type)
    DEFINITIONS[klass.name.downcase.to_sym][type]
  end
end

# a user can say:
#
#   Play some good music
#
# 1. play  = the action the computer has to execute
# 2. some  = the range
# 3. good  = a selection made by the user for the object (music)
# 4. music = the subject
# The only hard thing to explain here is "good", because good depends on the expectation we have.
# What is good music? And what is good in general?
# good: to be desired or approved of.
# So we should define what is good for us based on the previous examples.
# How can you define good? What music will do "good" to me?
# IF I accept these types of sequences and I will listen to them, then it is good for me.

Tuesday, 9 February 2016

Stripe::CountrySpec

My contribution to stripe-gem





Now you can use


[4] pry(main)> Stripe::CountrySpec.retrieve("US")
#<Stripe::CountrySpec id=US 0x00000a> JSON: {
  "id": "US",
  "object": "country_spec",
  "supported_bank_account_currencies": {
    "usd": [
      "US"
    ]
  },
  "supported_payment_currencies": [
    "usd",
    "aed",
    "afn",
    "..."
  ],
  "supported_payment_methods": [
    "alipay",
    "card",
    "stripe"
  ],
  "verification_fields": {
    "individual": {
      "minimum": [
        "external_account",
        "legal_entity.address.city",
        "legal_entity.address.line1",
        "legal_entity.address.postal_code",
        "legal_entity.address.state",
        "legal_entity.dob.day",
        "legal_entity.dob.month",
        "legal_entity.dob.year",
        "legal_entity.first_name",
        "legal_entity.last_name",
        "legal_entity.ssn_last_4",
        "legal_entity.type",
        "tos_acceptance.date",
        "tos_acceptance.ip"
      ],
      "additional": [
        "legal_entity.personal_id_number",
        "legal_entity.verification.document"
      ]
    },
    "company": {
      "minimum": [
        "external_account",
        "legal_entity.address.city",
        "legal_entity.address.line1",
        "legal_entity.address.postal_code",
        "legal_entity.address.state",
        "legal_entity.business_name",
        "legal_entity.business_tax_id",
        "legal_entity.dob.day",
        "legal_entity.dob.month",
        "legal_entity.dob.year",
        "legal_entity.first_name",
        "legal_entity.last_name",
        "legal_entity.ssn_last_4",
        "legal_entity.type",
        "tos_acceptance.date",
        "tos_acceptance.ip"
      ],
      "additional": [
        "legal_entity.personal_id_number",
        "legal_entity.verification.document"
      ]
    }
  }
}

https://github.com/stripe/stripe-ruby/commit/f8532d225e20cb6ded2bd9a672a6d8a0479b80ba
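
A short sketch of using the new resource, assuming a configured API key and a stripe gem version that includes this commit; the attribute names are the ones shown in the JSON above:

require 'stripe'
Stripe.api_key = ENV['STRIPE_SECRET_KEY']

spec = Stripe::CountrySpec.retrieve('US')
puts spec.supported_payment_methods.inspect                       # ["alipay", "card", "stripe"]
puts spec.verification_fields.individual.minimum.first(3).inspect # first few required fields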

Monday, 8 February 2016

ruby method chain from string

# key = "method.another_method"
#user.method.another_method = :foo
#amazing method chain :D
arry = key.split('.')
last = arry.pop
arry.inject(user, :send).send "#{last}=".to_sym, value
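
A self-contained way to try it, with OpenStruct standing in for a real model (the key and value here are made up for illustration):

require 'ostruct'

user  = OpenStruct.new(profile: OpenStruct.new(name: nil))
key   = 'profile.name'
value = :foo

parts = key.split('.')
last  = parts.pop
parts.inject(user, :send).send("#{last}=", value)

p user.profile.name   # => :foo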

Thursday, 4 February 2016

Restify CheatSheet

// Restify Server CheatSheet.
// More about the API: http://mcavage.me/node-restify/#server-api
// Install restify with npm install restify
// 1.1. Creating a Server.
// http://mcavage.me/node-restify/#Creating-a-Server
var restify = require('restify');
// A restify server has the following properties on it: name, version, log, acceptable, url.
// And the following methods: address(), listen(port, [host], [callback]), close(), pre(), use().
var server = restify.createServer({
  certificate: null,      // If you want to create an HTTPS server, pass in the PEM-encoded certificate and key
  key: null,              // If you want to create an HTTPS server, pass in the PEM-encoded certificate and key
  formatters: null,       // Custom response formatters for res.send()
  log: null,              // You can optionally pass in a bunyan instance; not required
  name: 'node-api',       // By default, this will be set in the Server response header, default is restify
  spdy: null,             // Any options accepted by node-spdy
  version: '1.1.3',       // A default version to set for all routes
  handleUpgrades: false   // Hook the upgrade event from the node HTTP server, pushing Connection: Upgrade requests through the regular request handling chain; defaults to false
});
server.listen(3000, function () {
  console.log('%s listening at %s', server.name, server.url);
});
// You can change what headers restify sends by default by setting the top-level property defaultResponseHeaders. This should be a function that takes one argument data, which is the already serialized response body.
// data can be either a String or Buffer (or null). The this object will be the response itself.
restify.defaultResponseHeaders = function(data) {
  this.header('Server', 'helloworld');
};
restify.defaultResponseHeaders = false; // disable altogether
// 1.2. Server API Event Emitters.
// http://mcavage.me/node-restify/#Server-API
// Restify servers emit all the events from the node http.Server and has several other events you want to listen on.
// http://nodejs.org/docs/latest/api/http.html#http_class_http_server
server.on('NotFound', function (request, response, cb) {}); // When a client request is sent for a URL that does not exist, restify will emit this event. Note that restify checks for listeners on this event, and if there are none, responds with a default 404 handler. It is expected that if you listen for this event, you respond to the client.
server.on('MethodNotAllowed', function (request, response, cb) {}); // When a client request is sent for a URL that does exist, but you have not registered a route for that HTTP verb, restify will emit this event. Note that restify checks for listeners on this event, and if there are none, responds with a default 405 handler. It is expected that if you listen for this event, you respond to the client.
server.on('VersionNotAllowed', function (request, response, cb) {}); // When a client request is sent for a route that exists, but does not match the version(s) on those routes, restify will emit this event. Note that restify checks for listeners on this event, and if there are none, responds with a default 400 handler. It is expected that if you listen for this event, you respond to the client.
server.on('UnsupportedMediaType', function (request, response, cb) {}); // When a client request is sent for a route that exists, but has a content-type mismatch, restify will emit this event. Note that restify checks for listeners on this event, and if there are none, responds with a default 415 handler. It is expected that if you listen for this event, you respond to the client.
server.on('after', function (request, response, route, error) {}); // Emitted after a route has finished all the handlers you registered. You can use this to write audit logs, etc. The route parameter will be the Route object that ran.
server.on('uncaughtException', function (request, response, route, error) {}); // Emitted when some handler throws an uncaughtException somewhere in the chain. The default behavior is to just call res.send(error), and let the built-ins in restify handle transforming, but you can override to whatever you want here.
// 1.3. Request API.
// Wraps all of the node http.IncomingMessage APIs, events and properties, plus the following.
// http://mcavage.me/node-restify/#Request-API
req.header(key, [defaultValue]); // Get the case-insensitive request header key, and optionally provide a default value (express-compliant).
req.accepts(type); // Check if the Accept header is present, and includes the given type.
req.is(type); // Check if the incoming request contains the Content-Type header field, and it contains the give mime type.
req.isSecure(); // Check if the incoming request is encrypted.
req.isChunked(); // Check if the incoming request is chunked.
req.isKeepAlive(); // Check if the incoming request is kept alive.
req.log; // Note that you can piggyback on the restify logging framework, by just using req.log
req.getLogger(component); // Shorthand to grab a new bunyan instance that is a child component of the one restify has.
req.time(); // The time when this request arrived (ms since epoch).
req.contentLength; // Short hand for the header content-length.
req.contentType; // Short hand for the header content-type.
req.href; // url.parse(req.url) href
req.log; // Bunyan logger you can piggyback on
req.id; // A unique request id (x-request-id)
req.path; // Cleaned up URL path
// 1.4. Response API.
// Wraps all of the node ServerResponse APIs, events and properties, plus the following.
// http://mcavage.me/node-restify/#Response-API
res.header(key, value); // Get or set the response header key.
res.charSet(type); // Appends the provided character set to the response's Content-Type.
res.cache([type], [options]); // Sets the cache-control header. type defaults to _public_, and options currently only takes maxAge.
res.status(code); // Sets the response statusCode.
res.send([status], body); // You can use send() to wrap up all the usual writeHead(), write(), end() calls on the HTTP API of node. You can pass send either a code and body, or just a body. body can be an Object, a Buffer, or an Error. When you call send(), restify figures out how to format the response (see content-negotiation, above), and does that.
res.json([status], body); // Short-hand for: res.contentType = 'json'; res.send({hello: 'world'});
res.code; // HTTP status code.
res.contentLength; // Short hand for the header content-length.
res.contentType; // Short hand for the header content-type.
res.headers; // Response headers.
res.id; // A unique request id (x-request-id).
// 2.1. Common Handlers.
// A restify server has a use() method that takes handlers of the form function (req, res, next). Note that restify runs handlers in the order they are registered on a server, so if you want some common handlers to run before any of your routes, issue calls to use() before defining routes. Note that in all calls to use() and the routes below, you can pass in any combination of direct functions (function(res, res, next)) and arrays of functions ([function(req, res, next)]).
// http://mcavage.me/node-restify/#Common-handlers:-server.use()
server.use(function slowHandler(req, res, next) {
  setTimeout(function() {
    return next();
  }, 250);
});
// 2.2. Bundle Plugins.
// Also, restify ships with several handlers you can use.
// http://mcavage.me/node-restify/#Bundled-Plugins
server.use(restify.acceptParser(server.acceptable)); // Parses out the Accept header, and ensures that the server can respond to what the client asked for. You almost always want to just pass in server.acceptable here, as that's an array of content types the server knows how to respond to (with the formatters you've registered). If the request is for a non-handled type, this plugin will return an error of 406.
server.use(restify.authorizationParser()); // Parses out the Authorization header as best restify can. Currently only HTTP Basic Auth and HTTP Signature schemes are supported. When this is used, req.authorization will be set to something like:
server.use(restify.CORS()); // Supports tacking CORS headers into actual requests (as defined by the spec). Note that preflight requests are automatically handled by the router, and you can override the default behavior on a per-URL basis with server.opts(:url, ...).
server.use(restify.dateParser()); // Parses out the HTTP Date header (if present) and checks for clock skew (default allowed clock skew is 300s, like Kerberos). You can pass in a number, which is interpreted in seconds, to allow for clock skew.
server.use(restify.queryParser()); // Parses the HTTP query string (i.e., /foo?id=bar&name=mark). If you use this, the parsed content will always be available in req.query, additionally params are merged into req.params. You can disable by passing in mapParams: false in the options object.
server.use(restify.jsonp()); // Supports checking the query string for callback or jsonp and ensuring that the content-type is appropriately set if JSONP params are in place. There is also a default application/javascript formatter to handle this. You should set the queryParser plugin to run before this, but if you don't this plugin will still parse the query string properly.
server.use(restify.gzipResponse()); // If the client sends an accept-encoding: gzip header (or one with an appropriate q-val), then the server will automatically gzip all response data. Note that only gzip is supported, as this is most widely supported by clients in the wild.
server.use(restify.bodyParser()); // Blocks your chain on reading and parsing the HTTP request body. Switches on Content-Type and does the appropriate logic. application/json, application/x-www-form-urlencoded and multipart/form-data are currently supported.
server.use(restify.requestLogger()); // Sets up a child bunyan logger with the current request id filled in, along with any other parameters you define.
server.use(restify.throttle()); // Restify ships with a fairly comprehensive implementation of Token bucket, with the ability to throttle on IP (or x-forwarded-for) and username (from req.username). You define "global" request rate and burst rate, and you can define overrides for specific keys. Note that you can always place this on per-URL routes to enable different request rates to different resources (if for example, one route, like /my/slow/database is much easier to overwhelm than /my/fast/memcache).
server.use(restify.conditionalRequest()); // You can use this handler to let clients do nice HTTP semantics with the "match" headers. Specifically, with this plugin in place, you would set res.etag=$yourhashhere, and then this plugin will do one of: return 304 (Not Modified) [and stop the handler chain], return 412 (Precondition Failed) [and stop the handler chain], Allow the request to go through the handler chain.
server.use(restify.fullResponse()); // sets up all of the default headers for the system
server.use(restify.bodyParser()); // remaps the body content of a request to the req.params variable, allowing both GET and POST/PUT routes to use the same interface
// 3. Routing.
// http://mcavage.me/node-restify/#Routing
// You are responsible for calling next() in order to run the next handler in the chain.
function send(req, res, next) {
  res.send('hello ' + req.params.name);
  return next();
}
function rm(req, res, next) {
  res.send(204);
  return next('foo2');
}
server.post('/hello', send);
server.put('/hello', send);
server.get('/hello/:name', send);
server.head('/hello/:name', send);
server.del('hello/:name', rm);
// You can also pass in a RegExp object and access the capture group with req.params (which will not be interpreted in any way).
server.get(/^\/([a-zA-Z0-9_\.~-]+)\/(.*)/, function(req, res, next) {
  console.log(req.params[0]);
  console.log(req.params[1]);
  res.send(200);
  return next();
});
// You can pass in a string name to next(), and restify will lookup that route, and assuming it exists will run the chain from where you left off.
server.get({
  name: 'foo2',
  path: '/foo/:id'
}, function (req, res, next) {
  assert.equal(count, 1);
  res.send(200);
  next();
});
// Routes can also accept more than one handler function.
server.get(
  '/foo/:id',
  function(req, res, next) {
    console.log('Authenticate');
    return next();
  },
  function(req, res, next) {
    res.send(200);
    return next();
  }
);
// Most REST APIs tend to need versioning, and restify ships with support for semver versioning in an Accept-Version header, the same way you specify NPM version dependencies
var PATH = '/hello/:name';
server.get({path: PATH, version: '1.1.3'}, sendV1);
server.get({path: PATH, version: '2.0.0'}, sendV2);
// You can default the versions on routes by passing in a version field at server creation time. Lastly, you can support multiple versions in the API by using an array:
server.get({path: PATH, version: ['2.0.0', '2.1.0']}, sendV2);
// 4. Content Negotiation.
// http://mcavage.me/node-restify/#Content-Negotiation
// If you're using res.send() restify will automatically select the content-type to respond with, by finding the first registered formatter defined.
var server = restify.createServer({
  formatters: {
    'application/foo': function formatFoo(req, res, body) {
      if (body instanceof Error)
        return body.stack;
      if (Buffer.isBuffer(body))
        return body.toString('base64');
      return util.inspect(body);
    }
  }
});
// Note that if a content-type can't be negotiated, the default is application/octet-stream. Of course, you can always explicitly set the content-type.
res.setHeader('content-type', 'application/foo');
res.send({hello: 'world'});
// You don't have to use any of this magic, as a restify response object has all the "raw" methods of a node ServerResponse on it as well.
var body = 'hello world';
res.writeHead(200, {
  'Content-Length': Buffer.byteLength(body),
  'Content-Type': 'text/plain'
});
res.write(body);
res.end();
// 5. Error Handling.
// http://mcavage.me/node-restify/#Error-handling
// If you invoke res.send() with an error that has a statusCode attribute, that will be used, otherwise a default of 500 will be used
// You can also shorthand this in a route by doing:
server.get('/hello/:name', function(req, res, next) {
  return database.get(req.params.name, function(err, user) {
    if (err)
      return next(err);
    res.send(user);
    return next();
  });
});
// Alternatively, restify 2.1 supports a next.ifError API
server.get('/hello/:name', function(req, res, next) {
  return database.get(req.params.name, function(err, user) {
    next.ifError(err);
    res.send(user);
    next();
  });
});
// Trigger an HTTP error
// The built-in restify errors are: RestError, BadDigestError, BadMethodError, InternalError, InvalidArgumentError, InvalidContentError, InvalidCredentialsError, InvalidHeaderError, InvalidVersionError, MissingParameterError,
// NotAuthorizedError, RequestExpiredError, RequestThrottledError, ResourceNotFoundError, WrongAcceptError
// The core thing to note about an HttpError is that it has a numeric code (statusCode) and a body. The statusCode will automatically set the HTTP response status code, and the body attribute by default will be the message.
server.get('/hello/:name', function(req, res, next) {
  return next(new restify.ConflictError("I just don't like you"));
});
server.get('/hello/:name', function(req, res, next) {
  return next(new restify.errors.ConflictError("I just don't like you"));
});
server.get('/hello/:name', function(req, res, next) {
  return next(new restify.InvalidArgumentError("I just don't like you"));
});
// You can always add your own by subclassing restify.RestError like:
var restify = require('restify');
var util = require('util');
function MyError(message) {
  restify.RestError.call(this, {
    restCode: 'MyError',
    statusCode: 418,
    message: message,
    constructorOpt: MyError
  });
  this.name = 'MyError';
}
util.inherits(MyError, restify.RestError);
// 6. Socket.io.
// To use socket.io with restify, just treat your restify server as if it were a "raw" node server.
// http://mcavage.me/node-restify/#Socket.IO
var socketio = require('socket.io'); // needed for socketio.listen() below
var fs = require('fs');              // needed for fs.readFile() below

var server = restify.createServer();
var io = socketio.listen(server);

server.get('/', function indexHTML(req, res, next) {
  fs.readFile(__dirname + '/index.html', function (err, data) {
    if (err) {
      next(err);
      return;
    }
    res.setHeader('Content-Type', 'text/html');
    res.writeHead(200);
    res.end(data);
    next();
  });
});

io.sockets.on('connection', function (socket) {
  socket.emit('news', { hello: 'world' });
  socket.on('my other event', function (data) {
    console.log(data);
  });
});

server.listen(8080, function () {
  console.log('socket.io server listening at %s', server.url);
});
// Restify Client CheatSheet.
// More about the API: http://mcavage.me/node-restify/#client-api
// Install restify with npm install restify
var restify = require('restify');
// 1. JsonClient.
// Sends and expects application/json.
// http://mcavage.me/node-restify/#JsonClient
// options: accept, connectTimeout, requestTimeout, dtrace, gzip, headers, log, retry, signRequest, url, userAgent, version.
var client = restify.createJsonClient(options);
client.get(path, function(err, req, res, obj) {}); // Performs an HTTP get; if no payload was returned, obj defaults to {} for you (so you don't get a bunch of null pointer errors).
client.head(path, function(err, req, res) {}); // Just like get, but without obj.
client.post(path, object, function(err, req, res, obj) {}); // Takes a complete object to serialize and send to the server.
client.put(path, object, function(err, req, res, obj) {}); // Just like post.
client.del(path, function(err, req, res) {}); // del doesn't take content, since you know, it shouldn't.
// 2. StringClient.
// http://mcavage.me/node-restify/#StringClient
// options: accept, connectTimeout, requestTimeout, dtrace, gzip, headers, log, retry, signRequest, url, userAgent, version.
var client = restify.createStringClient(options);
client.get(path, function(err, req, res, data) {}); // Performs an HTTP get; if no payload was returned, data defaults to '' for you (so you don't get a bunch of null pointer errors).
client.head(path, function(err, req, res) {}); // Just like get, but without data.
client.post(path, object, function(err, req, res, data) {}); // Takes a complete object to serialize and send to the server.
client.put(path, object, function(err, req, res, data) {}); // Just like post.
client.del(path, function(err, req, res) {}); // del doesn't take content, since you know, it shouldn't.

Adams Heroku Values

Make it real

Ideas are cheap. Make a prototype, sketch a CLI session, draw a wireframe. Discuss around concrete examples, not hand-waving abstractions. Don't say you did something, provide a URL that proves it.

Ship it

Nothing is real until it's being used by a real user. This doesn't mean you make a prototype in the morning and blog about it in the evening. It means you find one person you believe your product will help and try to get them to use it.

Do it with style

Just because we're building bad-ass infrastructure and tools doesn't mean it can't be cool, stylish, and fun. Aesthetics matter. See The Substance of Style.

Slick and fun meets powerful and serious.

Before Heroku (and a few others, like GitHub and Atlassian), developer-facing products were almost always stodgy, ugly, and completely lacking in style or fun.

We're part of the consumerization of IT.

Intuition-driven balances data-driven

Hunches guide you to places to create new value in the product. Users don't really know what they want. Creating products people love requires treating product development as an art, not a science; but products have to solve real user problems. Understanding the impact of changes to an existing product is best done by mining the data. When you have a mature product and many users you have lots of data on how they are using it. Use that data to make evidence-based decisions.

See: Inspired: Created Products People Love

Divide and conquer

Big, hard problems become easy if you cut them into small pieces. How do you eat an elephant? One bite at a time. If a problem seems hard, think about how you can cut it into two smaller, easier problems. If one of those problems is still too hard, cut it in half again.

Wiggins' Law: If it's hard, cut scope.

Timing matters

If you're building something and just can't seem to get it right, maybe now isn't the right time. You learned something in the attempt, set it down for a while. Maybe in a few weeks or a few months you (or someone else) will pick it up again and find that the world has changed in a way that makes it the right time to build the thing.

Throw things away

It's not the code that is valuable, it's the understanding you've gained from building it. See James' startup school talk.

Never be afraid to throw something away and do it again, it will almost always be faster to build and much better the second (or third, or Nth) time around.

Machete design

Create a single, general-purpose tool which is simple to understand but can be applied to many problems. It's like the product version of Occam's razor.

The value of a product is the number of problems it can solve divided by the amount of complexity the user needs to keep in their head to use it. Consider an iPhone vs a standard TV remote: an iPhone touchscreen can be used for countless different functions, but there's very little to remember about how it works (tap, drag, swipe, pinch). With a TV remote you have to remember what every button does; the more things you can use the remote for, the more buttons it has. We want to create iPhones, not TV remotes.

Small sharp tools

Composability. Simple tools which do one thing well and can be composed with other tools to create a nearly infinite number of results. For example, the unix methodology (stdin/stdout and pipes), see The Art of Unix Programming. Heroku examples include the add-ons API, logging/logplex, and procfile/the process model.

Small is beautiful. This isn't just tools, it's also teams. Several small, autonomous, focused teams working in concert almost always beat a single monolithic team.

Put it in the cloud

I don't want to run software, ever. Given a choice between a great app that runs locally and a mediocre app that runs in the cloud, I'll always take the latter (e.g. Excel vs Google Spreadsheets, 1Password vs LastPass, Things vs a text-file todo list on Dropbox). Services, not software.

Results, not politics

You "get ahead" in your heroku career by delivering real value to customers and to the company, not by impressing your boss or with big talk.

Decision-making via ownership, not consensus or authority

Every product, feature, software component, web page, business deal, blog post, and so on should have a single owner. Many people may collaborate on it, but the owner is "the buck stops here" and makes the final call on what happens with the owned thing.

The owner can and should collect feedback from others, but feedback is just that: input that the owner might or might not choose to incorporate into their work. If something doesn't have an owner, no one should be working on it or trying to make decisions about it. Before those things can happen, it has to be owned.

Ownership can't be given, only taken. Ownership can't be declared, only demonstrated. Ownership begins with whoever creates the thing first. Later the owner may hand it off to someone else. If an item gets dropped for some reason (for example, the current owner switching teams or leaving the company), it's fair game for anyone else to pick up.

Apple's term for an owner is "directly responsible individual," or DRI.

Do-ocracy / intrapreneurship

Ask forgiveness, not permission.

Everything is an experiment

Anything we do -- a product, a feature, a standing meeting, an email campaign -- is always subject to change. That includes discontinuing or shutting down whatever the thing is. Ending an experiment isn't a failure, since we often learn the most from experiments that don't produce the results we wanted.

Own up to failure

Did you make a mistake by posting to the blog at the wrong time? By failing to document the feature before you shipped it? By screwing up a customer's app? By not respecting someone's ownership, or hurting someone's feelings?

Own it. Admit your mistake, say you're sorry (when applicable), and feel the failure to make sure you learned from it. Then, get back to work.

Gradual rollouts

Ease into everything. Use feature flags to activate people slowly into changes, then let it bake for a bit. Test out the message for a public launch by first sending it around internally, and later writing the private beta announcement. Collect feedback and adjust. By the time you're ready to take it public to a wide audience, you'll be fairly certain to have worked out all the kinks.

See: Crossing the Chasm

Design everything

Be intentional.

See:

Do less

Do we really need that feature? Can we delete that code? Do we really need that command? Can we outsource to or partner with another company so that we don't have to build and maintain something?

See: Ephemeralization

Question everything

The status quo is never good enough.

See:

Interfaces matter

Everything has an interface. A platform has an API. A computer has a keyboard, a mouse, and a GUI operating system.

Teams have interfaces too. How do you file a bug or make a request? Where and when does the team collaborate with any other team?

The two critical components of a good interface are that it be narrow and well-defined.

For example, the add-ons API is extremely narrow. Add-on providers need only implement two calls: one to provision a resource, and one to consume it.

The add-ons API is also well-defined, with an API spec and the kensa tool which runs a live test to verify the correctness of your implementation.

A poor interface is one that is wide and poorly-defined. For example, the way that apps interact with the operating system in traditional server-based hosting is poor. The number of ways the app can interact with the operating system -- system calls, libc, the entire filesystem, executing binaries in subshells -- is essentially infinite and impossible to specify.

See: Explicit Contracts

Names matter

Think carefully about how something is named. Pick exactly one name for each concept the user needs to track, and use it consistently. For example, add-on providers are always called providers, never "vendor" or "partner" or anything else. Writing a glossary can be a good way to design the vocabulary around something.

Maniacal focus on simplicity

There is no step 1.

CLI 4 LIFE

Web UIs are great for many things, but command-line interfaces are the heart of developer workflows.

Ignore the competition (except to borrow good ideas)

Tim O'Reilly said it best.

Write well

Good writing is a powerful tool for communication. Clear writing is clear thinking.

See:

Strong opinions, weakly held

Have a strong opinion and argue passionately for it. But when you encounter new information, be willing to change your mind.

See: Strong Opinions, Weakly Held

Candor

Be blunt, honest, and truthful. Constructive criticism is the best kind. Avoid keeping quiet with your criticism about someone or something for the sake of politeness. Don't say something about someone to a third party that you wouldn't say to their face.

See: Winning

Programming literacy for all

Software is eating the world. Everyone can and should be able to write software in order to have a stake in the future.

See: End-user computing