Why I Moved to Bangalore

I’ve always thought of myself as one of the hardest-working people I know, if not the hardest. Definitely not the most productive, but I try. I’ve sacrificed a lot over the years doing startups, and I finally feel like I have taken the biggest plunge you can take in the startup game: moving to Bangalore, India. On the spectrum of things you can do to execute on a startup, this is definitely the most hardcore move you can make. Being from the Bay Area, I knew I would be in for some culture shock, and moving here has not disappointed.

Setting aside how you can see all the standard farm animals in the middle of an urban environment (I saw a family of pigs walking down the street today on my way to work), this place definitely has its challenges. But I want to talk about why I did it, and there really is only one reason: to build a committed team that I can scale.

The most obvious variable here is salaries, which are obviously lower, but honestly, that’s not the whole reason I am here. It is not about paying someone less but about paying someone more than they are used to, so that there is no way they would leave your company. I am SOOOO tired of worrying about recruiters poaching my employees, as has happened in every startup I have done. Retention is such a struggle in the States, and when you aren’t fully VC backed, it’s near impossible to find good people who will stay working for you long term on startup wages plus equity. I’m always using a bleeding-edge stack, so no matter what, my employees get leveled up and eventually will 100% be getting attractive offers that I might not be able to compete with. Losing critical employees early in a startup is such a massive blow, and I feel like out here I can always stay competitive without having to play moneyball every time I recruit. I can actually get solid people with experience and always be able to give them a raise or bonus when they deserve it.

I sort of feel like a modern-day colonialist out here, but at the same time I know I am also going to be providing good opportunities to those I work with. Really hoping this one works out, but if it doesn’t, I will for sure have some good stories to tell from the whole experience. Like the time I saw all the pigs walking down a busy street causing a traffic jam. It just gets me how people weave in and out of the cows and pigs that roam free. Not used to that yet.

Buying a Lottery Ticket with Tackle

When I started writing tackle-box, I never thought it would turn into an actual language, but that is how it has evolved. Initially I was just trying to learn how to code and didn’t know how to properly contribute my changes back to the original project I was modifying (cookiecutter). The changes grew too great, so when I tried to PR them back to the original code base, the maintainers were like “nice work, but we can’t do anything with that.” Fast forward almost two years, and the project has evolved into a full-blown programming language.

Dennis Ritchie, the creator of C, once said about creating languages, “It’s a lottery, and some can buy a lot of the tickets. There are plenty of beautiful languages (more beautiful than C) that didn’t catch on. But someone does win the lottery, and doing a language at least teaches you something.” That’s basically where I am at: someone who bought a ticket and might have something more elegant than what is around, but with a slim chance of actually catching on. That being said, I did a number of cool things not normally found in other languages.

  1. My language is serializable - i.e. the whole thing can be expressed in yaml / json, which is super useful for a variety of config management tasks
  2. It is the first serializable language (as far as I can tell) that is Turing complete, with loops, conditionals, and branching
  3. It can easily be embedded within configuration files like yaml, json, toml, and ini files
  4. It’s simpler to understand than any other language of its type out there.
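
As a purely illustrative sketch of point 1 (using the arrow and if syntax described in the rebuild notes further down; the keys and values here are made up, not a spec), a tackle file is just yaml:

```yaml
# Illustrative only - key names and hook usage are hypothetical
network:
  name: mainnet
  version: v1
check->:
  if: network.name == 'mainnet'
  msg->: print Deploying mainnet
```

Because the whole program is plain yaml, it can be committed, diffed, and templated like any other config file.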

To be clear, it isn’t meant for building anything super complicated, but I can imagine a community forming around building small building blocks for code generators. I can also see people (me) building cool reference implementations and using tackle to wrap various tools like kubectl and helm to manage clusters.

Anyways, we’ll see where this goes.

No Idiomatic Dev Ops

Recently I heard something that resonated: building idiomatic dev ops is still an undefined problem. There are a lot of great solutions out there for building architectures but very few examples of these tools actually being used in the wild. Almost all the implementations are reference architectures, because typically the infrastructure side of a company has no chance of ever being open sourced, for too many reasons to get into. But that is not really the case in the blockchain field where I work. There, not only is open source the status quo, but the value of open sourcing dev ops code translates well across individual deployments. This is because (1) everyone is trying to manage the same application and (2) all the applications (i.e. eth and other PoS networks) typically behave the same (check out kotal).

Anyways, idiomatic dev ops is still not a thing, and while there are a lot of smart people working on it, I think the biggest blocker at the moment comes down to a few observations:

  • Most dev ops applications are simply ways of getting config files into target applications
    • See helm, helmfile, terragrunt (more on this one later), etc.
  • Applications typically don’t care about anything past what exactly they are being fed
  • This creates a situation where you have a lot of great tools, but it is on the user to actually get them working well within their environment’s specifications
    • For instance, if you have multiple envs like most do, you typically need to engineer which attributes those envs share and which are individual
  • Brass tacks - everything ends up in yaml or some file like it, which is static, so workflows have to resolve to something like that
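
The multi-env point above boils down to a merge problem: shared attributes live in a base config and each environment overlays its own values before anything gets serialized to yaml. A minimal sketch of that idea (all names here are illustrative, not from any specific tool):

```python
# Sketch of the multi-env config problem: a shared base dict plus
# per-environment overlays, recursively merged into a final config.

def merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay onto base, returning a new dict."""
    out = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

base = {"replicas": 1, "image": {"name": "indexer", "tag": "latest"}}
envs = {
    "dev":  {"replicas": 1},
    "prod": {"replicas": 3, "image": {"tag": "v1.2.0"}},
}

# The prod config keeps shared keys and overrides only what it declares.
prod = merge(base, envs["prod"])
```

Every tool reinvents some flavor of this (helmfile layering, terragrunt includes, etc.), which is exactly why it ends up on the user to wire together.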

For these issues, tackle-box is going to crush it for a couple reasons:

  • It is free form, in that the user has the perfect tooling to implement their custom DRY solution the way they want. There is no one-size-fits-all, but given a little bit of tackle knowledge, building something DRY is not hard
  • All the tools that people use (i.e. terraform, ansible, helm, helmfile, etc.) have a schema that you can simply code generate against, or even better, create an abstract interface to implement
    • Code generation is pretty straightforward in that you make a tackle that generates the files needed and then hook into something that can call them (open source examples for all major tools coming)
    • Implementing an abstract interface is another story…
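
To make the abstract-interface idea a bit more concrete, here is a hedged sketch of what it could look like: every tool reduces to producing a config file from structured inputs, so each tool gets one implementation of a shared interface. All class and method names below are hypothetical, not from tackle or any real library:

```python
# Hypothetical sketch: a generic interface where each devops tool's job
# is reduced to rendering a config file from structured inputs.
from abc import ABC, abstractmethod
import json

class ConfigRenderer(ABC):
    """One implementation per tool (helm, terraform, ...)."""

    @abstractmethod
    def render(self, env: dict) -> str:
        """Return the tool's config file contents for a given environment."""

class HelmValuesRenderer(ConfigRenderer):
    def render(self, env: dict) -> str:
        # Real helm values files are yaml; json is a valid yaml subset
        # and keeps this sketch dependency-free.
        return json.dumps({"image": {"tag": env["tag"]}, "replicas": env["replicas"]})

out = HelmValuesRenderer().render({"tag": "v1.0.0", "replicas": 2})
```

The hard part, as noted above, is designing the interface so it is general enough for terraform and ansible too without collapsing into "just a dict of dicts."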

That is what this blog is really going to be about: how to implement a generic abstract interface to couple all the other tools together. Obviously I wouldn’t be writing this if it weren’t for tackle, which I see as a potential solution for this challenge. To be real, I haven’t seen anyone actually formalize the challenge in this light yet, but I think this is the right framing: how exactly do you knit together your favorite tool / process to generate a config file that is callable by your favorite dev ops tool?

This will have to be a multi-part thing as, frankly, it is hard enough to find inspiration to get my day-to-day work done, let alone craft compelling stories that promote tackle box. Shit is still a WIP and my brain is jelly most of the time.

Jobby jobs

So last month I got let go from my prior employer along with my whole team. While it was a bit of a jarring event, all my employees settled into good jobs within a month, and I am personally taking care of my prior contracts through my own business. Those engagements were relying on me, and while my team vaporized, I am taking on the load, doing all the dev / ops work to support deploying multiple k8s clusters in a new cloud provider, dealing with 2 environments, 2 regions, 10 / 5 services x2 across 4 networks. I have spent the last two weeks staring at grafana trying to de-bottleneck this new stack, and I now have my throughput to the point that I can index the ICON blockchain through kafka in a reasonable amount of time. This amounts to about 2k k8s manifests, all of which I am code generating with tackle. I am also interviewing new employees in India and planning on moving out there to be close to a team that I am lining up with a long-term contract to support. Pretty hectic working non-stop. Literally waking up every morning, getting to my desk, and working until I go to sleep, weeks on end. This is basically what I have been doing for years, but this is on turbo. Tackle is about to be released and it is all happening at once.

release

Damn, it has been years that I have been working on tackle box, but it is finally ready to be released. I have worked on this thing almost every weekend for the last two years, as I knew there were some patterns at play that simply made sense. When my last girlfriend broke up with me, tackle box was distinctly one of the reasons. I’m ok with it though, as tackle box came out better than I could have ever imagined, despite me thinking at one point that I was going to marry that girl.

One thing that I still don’t have a grasp of is what type of language it is. Is it a programming language? I honestly don’t know but it has many of the characteristics. Anyways, this is something to write another blog about. What did I actually build? Fuck knows, shit is dope.

Rebuilding tackle box

So I relatively recently decided to give up 1:1 support for rendering cookiecutter templates and decided to rewrite my code. Cookiecutter is the parent project I built tackle-box out of, and I thought it would have been really cool to be able to get all those CC users over to TB by supporting their templates. In the end, I feel like TB is better off being its own thing. I don’t want to bemoan all the reasons behind the decision (there are a lot), but now I am dealing with a lot of interesting scenarios, as a couple things were enabled by making this switch.

  1. New syntax

Before, I used the key type to indicate the calling of a hook, which I thought was (a) verbose and (b) not very expressive, leaving no room for syntactic sugar. I have since switched to an arrow-based scheme as below.

Old

a_key:
  type: print
  input: A thing to print

New

a_key:
  ->: print
  input: A thing to print

I also added the notion of arguments and single-level calls like so:

a_key:
  ->: print A thing to print
# Is the same as
b_key->: print A thing to print

This opens up the notion of what the arrow looks like (i.e. -> / _> / <- / <_). Public and private hook calls look like -> and _> respectively, private meaning the output is available for successive rendering / calls but not included in the output context. I am also contemplating the usefulness of the notion of functions, which would be written in tackle files with the <- / <_ arrows and made available as reusable components within scripts.
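
Based on those semantics, a public vs private call might look like this (illustrative only; the keys are made up):

```yaml
greeting->: print Hello      # public: runs and stays in the output context
helper_>: print Internal     # private: later renders / calls can use it,
                             #          but it is dropped from the output
```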

  2. Can parse arbitrary documents

CC had you lay everything out on one level of key-value pairs, which made nesting logic somewhat complicated. Before the rebuild, TB had the notion of a block hook, which was a collection of other hooks that allowed users to condition their execution or loop through them based on normal hook logic. For instance, this is the old way:

some_key:
  type: block
  when: some_var == 'something'
  items:
    stuff: things
    a_hook:
      type: another_hook

And now it is like this:

some_key->:
  if: some_var == 'something'
  stuff: things
  a_hook:
    ->: another_hook

This is handled under the hood with a macro that rewrites the input so that common things like the block hook are available.

  3. Every hook is renderable

In jinja, there are things called extensions and filters, which can be applied via {{ some_extension(vars) }} or {{ vars | some_filter }} respectively. Now that each hook has the notion of a positional argument, all hooks can be called when rendering. This is super helpful when you want to string together a number of hooks to get at something. For instance, this would work to dig a variable out of a yaml file:

worker_image_tag: "{{yaml(path_join([cluster_env,join([network_name,'-',network_version]),item,'values.yaml'])).worker}}"

  4. Functions

Will need to devote a whole blog series to that. Strongly typed tackle…

There’s more, but I need to just write it all up in an actual Medium series.

hiatus

Been a minute since I blogged, but I figured it was time to start writing. It has been a tumultuous year since I last blogged. I founded a blockchain startup, moved to two different states, but most relevantly, through long hours on the weekends, I am seeing the light at the end of the tunnel for finishing up tackle-box. Tackle box has been good enough to release for about a year, but I have been super apprehensive to do so because once I release it, I am bound to maintain the syntax, and since its success depends on community adoption, the last thing I want to do is release a half-finished product. That was my thought at least 8 months ago, and now it is finally paying off. I had a little inspiration about a month ago to just say fuck it and put in all the features I was dreaming about. These included:

  1. A new callout for tackle hooks, which are keys ending with ->. Before, tackle always looked for keys called type, which I thought had a couple problems. First, it is verbose; second, it conflicts with keys that people might use.
  2. Being able to support args and kwargs as CLI arguments. These I thought were the key to getting really compact syntax, along with the ability to use the tool as a declarative CLI. From the command line you can now do:
tackle thing.yaml arg arg_n --kwarg=kwarg --kwarg_n=kwarg_n

Or from a yaml you can do:

key->: print this and that

Basically, the semantics of how the language interprets args, kwargs, and flags are the same whether they are expressed internally within the syntax or passed from the command line.

There are many other features I am thinking about though, so we’ll see when this thing actually gets released. I don’t want to deal with a community until it is actually ready. Changing the syntax later is going to be a nightmare for users, so it’s best I get it all settled right now.

Docker Compose vs SystemD for One-Clicks

I had been struggling for a while over how to best configure VMs for my one-clicks. SystemD is fantastic, but simply put, nobody seems to be contributing many SystemD units these days, as everything is docker. It’s not like I am late to the docker party; I’m fully bought in. But for a while, when running VMs, I was just trying to build vanilla VMs, which meant SystemD.

I am now off the SystemD train and fully bought into docker for as much as it makes sense. I have now transitioned my one-clicks from Terraform -> Ansible -> SystemD to Terraform -> Ansible -> Docker Compose.

As an example, this stack took me 3 hours to build with docker-compose,

whereas this stack took me several days with all the debugging…

I think the real key is just getting systemd to play nice with docker-compose. I have yet to figure out the best way of doing that.
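
One common pattern (not necessarily the best one) is a unit file that delegates the stack’s lifecycle to docker-compose. The paths and service name here are assumptions:

```ini
# /etc/systemd/system/my-stack.service -- hypothetical name and paths
[Unit]
Description=my-stack via docker-compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/my-stack
ExecStart=/usr/bin/docker-compose up -d
ExecStop=/usr/bin/docker-compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
```

Type=oneshot with RemainAfterExit=yes lets systemd treat the detached compose stack as "active" after `up -d` returns, so enable/start/stop work as expected on boot.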

Starting to really hate VMs…

Out with Nukikata, Now it’s Tackle Box

So, as cool as nukikata was as a name, nobody could remember it, and nobody cares what it translates to in Japanese. So I set out to find a new name. I meditated hard on it and went through some iterations. I got stuck on kitchen-themed things and instead started to think about the ocean. There are lots of cool names in the ocean, like boats (docker) and whales (kubernetes). But then I remembered when I almost named my plugins hooks, and there it was: what is something that holds hooks? A tackle box.

Well there it is, tackle box.

A few features in the works.

  1. No more {{ cookiecutter.foo }} -> supports legacy and {{ foo }}
  2. Ability to record a series of options to a file, then easily include that output as a fixture for tests. This was a major pain in my first foray into building modules. If you can’t test it, it’s probably broken. These options are now exposed as modes similar to the legacy replay option. I’m calling the new modes record -> a way to output a run and dump the output in the event of failure, and rerun -> a way to run without inputs from a saved output (i.e. the output of record).
  3. Simpler logic with the input dicts. The variable names were confusing for a while, so I just renamed them. This was part of refactoring all the inputs into pydantic objects. It will be helpful if I want to expose anything as an API, but really it just gave me out-of-the-box data validation. New features don’t need variables piped from one end of the program to the other. Much cleaner and easier to understand. Still working out the tests as they all broke….
  4. Providers with their own requirements. Since tackle is supposed to wrap other libraries, it needs a way of lazily importing modules. I did the POC and just need to integrate it.
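
For #4, one standard way to defer a provider’s imports until a hook actually needs them is an importlib-based proxy. This is just the generic pattern, not tackle’s actual implementation:

```python
# Sketch of lazily importing provider requirements: the module is only
# imported on first attribute access, not when the provider is registered.
import importlib

class LazyModule:
    """Proxy that imports the named module on first attribute access."""

    def __init__(self, name: str):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# Nothing is imported until a hook actually touches the module.
yaml_like = LazyModule("json")
data = yaml_like.loads('{"a": 1}')
```

With this in place, a provider can declare heavy dependencies without slowing down startup for users who never call its hooks.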

As soon as I am done with #4, I’ll write the medium and officially launch the project.

Back to blogging, this time about Nuki

So it is time I get back on this. A number of things have changed since I was last active. I started working full time with Insight, building out a collection of different terraform deployments for blockchain networks.

The biggest thing going on is Nuki, though, which I’m excited to start actively blogging about. It is a fork of cookiecutter, but I have made a ton of changes that have opened a whole realm of new possibilities beyond code generation. Specifically, I am now building out my own DSL based on yaml that has a lot of Turing-like capabilities. The whole thing is modular and plugin based, where the DSL is only supposed to wrap plugins with some simple helper utilities like loop and when. I also have some branching capabilities, but they are not quite there yet. I just got pydantic integrated and am stuck on a few bugs. I am planning on building providers for each plugin, which will then make it ready for promotion and getting open source contributors.
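
For a rough flavor of what wrapping a plugin with the loop / when helpers might have looked like in the old type-key style (the keys and values here are made up for illustration):

```yaml
# Made-up illustration of the plugin-wrapping DSL
a_key:
  type: print
  when: env == 'prod'
  loop: "{{ regions }}"
  input: Deploying region
```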

Anyways, I will be blogging more as new updates get pushed. I am just about to declare independence from cookiecutter by doing some massive refactors that will basically make it its own project and hence deserve its own un-forked repo.

Also, I updated this blog’s deployment. I am now using this terraform module to deploy the S3 bucket and CloudFront distribution. Much cleaner now.

Anyways, will update more soon once nuki starts rolling.