Recent Posts

by John M Costa, III

Pragmatism for decision-making in Software Development

Overview

This post discusses pragmatism as a tool in Software Development. I consider myself to be pragmatic in my approach to software engineering, and wanted to explore the concept a little more.

What is pragmatism?

Pragmatism1 is a philosophical tradition that began in the United States around 1870. It is a way of thinking that focuses on the practical consequences of actions rather than on abstract principles. Pragmatists believe that the truth of an idea is determined by its usefulness in solving real-world problems. They are concerned with what works, rather than with what is theoretically correct.

Pragmatism in software development

Pragmatism can be a powerful tool in software development. Software projects can be complex, with lots of moving parts, and there’s lots of opportunity for things to go wrong, extending timelines and risking value delivery. Pragmatism can help you make decisions that are practical and effective, rather than getting bogged down in theoretical debates.

Some goals for pragmatism in software development can be distilled into the following:

  • deliver value to the customer (users and/or organization)
  • maximize stakeholder satisfaction
  • minimize stakeholder dissatisfaction

Looking around for inspiration

I looked around to see if there’s any existing work on pragmatism in software development. I found a few interesting papers, articles, and books that I wanted to share.

Optimization in Software Engineering - A Pragmatic Approach

Guenther Ruhe2 published a paper on Optimization in Software Engineering - A Pragmatic Approach3. The paper takes a process-based approach and includes a checklist for performing the optimization process. This process includes:

  • Scoping and Problem analysis
  • Modeling and Problem formulation
  • Solution Design
  • Data collection
  • Optimization
  • Validation
  • Implementation
  • Evaluation

The following describes each step in the process:

Scoping and Problem analysis

The first and most obvious step in the process is to ask if the problem can be solved easily. Easily solved problems help sidestep the need for additional time investment. When looking for an easy solution, consider alternatives as well.

Understand the stakeholders and decision makers around the problem. How important is this to them? How much time and effort can be invested in solving the problem? What’s the budget associated to solving the problem, considering both time and money?

Which approach best aligns with the business objectives and how would optimization benefit the problem context?

Modeling and Problem formulation

Depending on the complexity of the problem, work to break it down into smaller, more manageable parts. Identify key variables, constraints, and any dependencies.

Model project resourcing, budget and time for each phase of the project.

Identify technological constraints and dependencies.

Solution Design

Is there a solution already available that can be used as a baseline? If so, how does the proposed solution compare to the baseline?

What are the perceived expectations for the optimized approach?

Data collection

What data is available and what data is needed to solve the problem?

How reliable is the data? Is there a need for data cleaning?

Optimization

Which parameter settings are chosen and why? How do they vary?

Validation

What are the criteria for validation? How are they measured?

Do the stakeholders agree with the proposed solution?

Implementation

Is there anything that needs to be adjusted?

Evaluation

To what extent does the implemented solution solve the original problem, and is it acceptable to the stakeholders?

How much does the implementation improve on the baseline?

The Pragmatic Programmer

The Pragmatic Programmer is a book by Andrew Hunt and David Thomas. The book is a guide to software development that focuses on practical advice and best practices. The authors emphasize the importance of writing clean, maintainable code, and of being pragmatic in your approach to software development.

There are so many good nuggets in this book. If this isn’t already on your bookshelf, I highly recommend it.

Some of my favorite nuggets include:

  • Don’t live with broken windows
  • Good-Enough Software
  • Remember the bigger picture
  • Prototype to learn

Wrapping it up

I’m still exploring the concept of pragmatism in software development. Going beyond just pragmatism, I’m also interested in how to be a better product engineer4 while practicing pragmatism. I’m looking forward to sharing more on this topic in the future.

by John M Costa, III

Introducing RFCs to Share Ideas

Overview

Being on a remote team has many benefits, but it takes deliberate effort to connect with your teammates and build relationships. One way to do this is to share your ideas and have discussions about various design topics. It's a great way to learn from your peers and to share your knowledge with them, and the activity of writing and healthy discussion builds trust and a sense of community.

In addition to building relationships, sharing your ideas is a great step toward building a writing culture. Writing documents your thought process and makes it shareable. For me, it's also a way to become more concise in my thoughts and clearer in my communication.

One approach to sharing ideas is to write an RFC (Request for Comments). This is a document that outlines a problem and a proposed solution. It’s a great way to get feedback on your ideas and to build consensus around them.

What is an RFC?

Per Wikipedia1:

In 1969, Steve Crocker invented the RFC system. He did this to create a way to record unofficial notes on the development of ARPANET. RFCs have since become official documents of internet specifications, communications protocols, procedures, and events.

There are so many great resources on how to write an RFC. This post from the Pragmatic Engineer is a great place to start and lists a lot of great resources on the topic.

I’ve come to know RFCs as a templated format for sharing ideas and seeking consensus. They range from very formal to informal. They can be used for a variety of things, such as proposing a new feature, discussing a problem, or documenting a decision.

How to Write an RFC

There are plenty of resources on how to write an RFC, as they've been around for a while. I've come across a few different formats that I'm interested in learning more about and trying.

Keep Track of Your RFCs

Keeping track of the status of each RFC is important. This could be as simple as a spreadsheet or a more formal system like GitHub issues. The idea is to have a way to track the status of each RFC and to make sure it's being reviewed and acted upon. Keep your RFCs organized and easy to find, whether in a folder in Google Drive or in a GitHub repository. Typical statuses include:

  • Draft
  • Review
  • Approved
  • Discarded
  • Deprecated
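The statuses above form a simple state machine. As a sketch (a hypothetical helper, not part of any particular tracking tool), the allowed transitions could be encoded like this:

```python
# Hypothetical sketch: RFC statuses as a small state machine.
# Draft -> Review -> Approved -> Deprecated, with Discarded as an exit
# from Draft or Review.
ALLOWED_TRANSITIONS = {
    "Draft": {"Review", "Discarded"},
    "Review": {"Approved", "Discarded"},
    "Approved": {"Deprecated"},
    "Discarded": set(),      # terminal
    "Deprecated": set(),     # terminal
}

def advance(current: str, target: str) -> str:
    """Move an RFC to a new status, rejecting invalid transitions."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot move RFC from {current} to {target}")
    return target
```

Even if you track statuses in a spreadsheet, agreeing on which transitions are legal keeps the process predictable.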

Sample RFC

Title: Using RFCs to Share Ideas
Authors:
John Costa

1 Executive Summary
The primary problem this RFC proposes to solve is how to arrive at a consensus. Documenting architecture decisions
would be done elsewhere.

2 Motivation
Often there are great ideas that come up in discussions. Unfortunately, without a semi-formal process, these ideas
never get documented and are lost. An RFC is a way to document your thought process and to share it with others.

3 Proposed Implementation
The following proposal is a simplified version of a Request for Comments process. Much inspiration for this proposal,
and in some cases whole segments, has been drawn from these published resources:

* https://cwiki.apache.org/confluence/display/GEODE/Lightweight+RFC+Process
* https://philcalcado.com/2018/11/19/a_structured_rfc_process.html

Collaboration
Comments and feedback should be made in the RFC document itself. This way, all feedback is in one place and can be
easily referenced or referred to.

Authors must address all comments written by the deadline. This doesn't mean every comment and suggestion must be
accepted and incorporated, but they must be carefully read and responded to. Comments written after the deadline may be
addressed by the author, but they should be considered as a lower priority.

Every RFC has a lifecycle. The life cycle has the following phases:

* Draft: This is the initial state of the RFC, before the author(s) have started the discussion and are still working on the proposal.
* Review: This is the state where the RFC is being discussed and reviewed by the team.
* Approved: This is the state where the RFC has been approved and is ready to be implemented. It does not mean that the RFC is perfect, but that the team has reached a consensus that it is good enough to be implemented.
* Discarded: This is the state where the RFC has been discarded. This can happen for various reasons, such as the proposal being outdated, the team not reaching a consensus, or the proposal being too risky.
* Deprecated: This is the state where the RFC has been deprecated. This can happen when the proposal has been implemented and is no longer relevant, or when the proposal has been replaced by a better one.

Approval
The proposal should be posted with a date by which the author would like the approval decision to be made. How
much time is given for comments depends on the size and complexity of the proposed changes. Driving the actual
decisions should follow the lazy majority approach.

Blocking
If there are any blocking issues, the author should be able to escalate the issue to the team lead or the team. A block
should have a reason and, within a reasonable time frame, a solution should be proposed.

When to write an RFC?
Writing an RFC should be entirely voluntary. There is always the option of going straight to a pull request. However,
for larger changes, it might be wise to reduce the risk of a rejected pull request by first gathering input from
the team.

Immutability
Once approved, the body of the RFC should remain immutable.

4 Metrics & Dashboards
There are no explicit metrics or dashboards for this proposal. The RFC process is a lightweight process that is meant to
be flexible and adaptable to the needs of the team.

5 Drawbacks
- Slow: The RFC process can take time
- Unpredictable: The rate of new RFCs is not controlled
- No backpressure: There is no mechanism to control the implementation of RFCs
- No explicit prioritization: RFCs are implicitly prioritized by teams, but this is not visible
- May clash with other processes: RFCs may not be needed for smaller things
- In corporate settings, the RFC process should have a decision-making process that is clear and transparent

6 Alternatives
- ADRs (Architecture Decision Records)
- Design Docs
- Hierarchical, democratic, or consensus-driven decision-making

7 Potential Impact and Dependencies
The desired impact of this proposal is to have a more structured way to share ideas and to build consensus around them.

8 Unresolved questions
- None

9 Conclusion
This RFC is a proposal for a lightweight RFC process and can be used for remote teams looking to build consensus around
ideas.


by John M Costa, III

Reviewing Code

Overview

Code reviews are a critical part of the software development process. They help to ensure that the code is of high quality, that it’s maintainable, and that it’s secure. They also help to ensure that the code is in line with the company’s goals and values. Code reviews are also a great way to learn from your peers and to share your knowledge with them.

Knowledge Sharing

The code review should be a learning opportunity for everyone involved, whether as part of the review itself or later, when looking back at motivations and decisions.

Higher Quality

The code review should ensure that the code is of high quality. This means that it should be free of errors and warnings, that it should run properly, and that it should accomplish the feature(s) it was designed to accomplish.

Better Maintainability

The code review should ensure that the code is maintainable. This means that it should be easy to read and understand, that it should be well-documented, and that it should follow coding and technical standards.

Increased Security

The code review should ensure that the code is secure. This means that it should be free of security vulnerabilities, that it should not introduce any new security vulnerabilities, and that it should follow security best practices.

Optimization Opportunities

The code review should consider whether the code is efficient, doesn't waste resources, and is scalable.

Assumptions

After a first pass through this blog post, I realized I'm making a few assumptions about the environment.

One is that a version control system is being used healthily and that the code is being reviewed in a pull request.

Another is that the code is being reviewed by teammates who you work closely with and whom you trust to give and receive feedback with positive intent.

Priorities

To follow the principles above, I try to review code with the following priorities in mind:

  1. Is the code functional?

The first thing I try to do is understand if it accomplishes the feature(s) it was designed to accomplish. As a reviewer, this could mean reading a README and running the code. When running the code, I try to capture not only the happy path but also the edge cases and error handling. As a submitter, this could mean providing these tools for the reviewer, ideally as unit tests and README documentation.

  2. Is the code clean and maintainable?

Secondly, I try to look at the code from a cleanliness and maintainability perspective. To avoid as much subjectivity as possible, automated linters and static analysis tools should be used. In addition to these tools, the code should be well-documented, considering CSI (Comment Showing Intent)1 standards. The CSI Standard should exist alongside Self-Commenting2 Code practices, not instead of them. The code should also have binaries and unnecessary cruft removed.

  3. Is the code secure?

Thirdly, I try to look at the code from a security perspective. Admittedly, this is an area I’m learning more about. With that said, I delegate much of this to automated tools which cover things like OWASP® Top 10 and CWE/SANS Top 25.

  4. Can the code be optimized?

Lastly, I try to look at the code from an optimization perspective. This means that the code should be efficient and not waste resources. It should also be scalable.

Design and architecture

Something I’ve been trying to do more of is writing an RFC (Request for Comments) ahead of writing code for larger changes, to think through the design and architecture. This is a great way to get feedback on the design and approach well before the code is written, and to get buy-in from the team.

Additional Considerations

Google’s Standard of Code Review mentions that the primary goal of the code review is to ensure that “the overall code health of Google’s codebase is improving over time”. This might be good for a big company like Google, but I feel that if you prioritize people over code, the code will naturally improve over time. This is why I like the idea of using code reviews as a learning and knowledge sharing opportunity.

Additionally, something that resonated with me from How to Do Code Reviews Like a Human (Part One) is that code reviews should be about the code, not the person. To help avoid some pitfalls, use these techniques mentioned in the post:

  1. Never say “you”.
  2. Frame feedback as requests.
  3. Tie notes to principles, not opinions.
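As a hypothetical illustration of all three techniques at once, compare two versions of the same review note:

```text
Instead of:
  "You broke the pagination when you changed this query."

Try:
  "Could we add the LIMIT clause back to this query? Without it, the
  endpoint returns unbounded results, which conflicts with our API
  pagination guideline."
```

The second version avoids “you”, is phrased as a request, and points at a shared principle rather than a personal opinion.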

Checklist

The following is a checklist that's hopefully useful for pull requests. The idea is to apply it consistently, and it should be applicable to both openers and reviewers.

Checklist:

  • How

    • Does the code comply with the RFC (Request for Comments), if one exists?
    • Does the code accomplish the feature(s) it was designed to accomplish?
    • Is there documentation? README, CSI, Self-Commenting Code?
  • What

    • Are there tests? Do they cover the happy path, edge cases, and error handling?
    • Are linting and static analysis tools being used? If so, are they passing?
    • Are there any security vulnerabilities? Is the project up to date?
    • Are there any optimization opportunities?
      • Are there opportunities to reduce redundant code (DRY)?
      • Does it follow SOLID principles?
  • Follow-Up/TODOs

    • Are there any follow-up items that could be addressed?
  • Feedback

    • Is the feedback framed as a request?
    • Is the feedback tied to principles, not opinions?
    • Does the feedback avoid using “you”?

by John M Costa, III

5x15 Reports to Advocate for the Work of Yourself, Project, or Team

Overview

I’ve seen this less in smaller companies, but in larger companies colleagues will sometimes take credit for the work of others. This is a toxic behavior that can lead to a lack of trust and a lack of collaboration. It’s important to recognize the work of others and to give credit where credit is due.

While this wouldn’t be the only reason for doing so, one of the solutions I’ve found to help erode the toxic behavior is to use 5x15 reports to advocate for the work of one’s self, project, or team.

5x15

The 5x15 report is a weekly report that is sent to your manager. Yvon Chouinard, founder and CEO of outdoor equipment company Patagonia®, devised the 5-15 Report in the 1980s.1 As the name implies, it should take no longer than 5 minutes to read and no more than 15 minutes to write.

If you get a chance to read about Yvon Chouinard, you’ll find that he’s a very interesting person. He’s a rock climber, environmentalist, and a billionaire. He’s also the founder of Patagonia, a company that is known for its environmental advocacy.2

How to Write a 5x15 Report

The 5x15 report is a simple report that is sent to your manager. As mentioned, it should be no longer than 5 minutes to read and no more than 15 minutes to write. The report should include the following:

  1. Accomplishments: What you’ve accomplished in the past week.
  2. Priorities: What you plan to accomplish in the next week.
  3. Challenges: Any challenges you’re facing.
  4. Stats: Your personal stats for the week.
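Put together, a report might look like this (a hypothetical example; the names and numbers are made up for illustration):

```text
5x15 — Week of <date>

Accomplishments
- Shipped the invoice-export job (with A. Rivera); cut month-end
  reporting from ~2 days to ~2 hours.
- Unblocked the release pipeline by fixing two flaky integration tests.

Priorities
- Finish the RFC for the audit-log service.
- Pair with the new hire on the deployment runbook.

Challenges
- Still waiting on access to the billing database. Proposed workaround:
  develop against the weekly snapshot.

Stats
- Energy: good · Quality of Life: good · Pacing: Executing
```

Keeping the format stable week over week makes the report faster to write and faster for your manager to scan.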

Accomplishments

The accomplishments section is where you can advocate for the work of yourself, your project, or your team. This is one of the most critical sections of the 5x15 report. It’s important to recognize the work of others and to give credit where credit is due, which includes crediting yourself, your project, or your team.

Depending on the size of your accomplishments, try to quantify them in terms of the impact they’ve had on the company. For example, if you’ve saved the company $100,000, then you should mention that in your report. If you don’t know the impact of your accomplishments, then you should try to find out. Perhaps put this in the challenges section of your report.

In addition to charting the impact of your accomplishments, you could also frame them in terms of the company’s goals and values. For example, if your company values Execution, then you could frame your accomplishments in terms of how they’ve helped the company execute on its goals. As an engineer, I sometimes forget about the soft skills that are required to be successful in the workplace. The 5x15 is a great way to highlight where you’ve used these soft skills to be successful.

Priorities

This should be a list of your priorities for the next week. This is a great way to set expectations with your manager and provide an opportunity to change the priorities if they’re not aligned with the company’s goals and values at that time.

In general, priorities shouldn’t change too much from week to week. If they do, then you should try to find out why.

Challenges

This isn’t a section to complain about your job. This is a section to highlight the challenges you’re facing and to provide an opportunity for your manager to help you overcome them. If you’re not facing any challenges, perhaps you’re not pushing yourself, project, or team. Try to provide potential solutions to the challenges you’re facing along with the challenges themselves. This will show that you’re proactive and that you’re thinking about how to overcome the challenges you’re facing without needing to be told what to do.

Stats

This is a section to provide your personal stats for the week. These stats should be meaningful to you and your manager, tracking resources like Energy, Credibility, Quality of Life, Skills, and Social Capital. Think of it as your personal dashboard, if you were to have one.3

Something new I’m thinking about trying is to include more insight into how I’m pacing myself. Stretching, Executing, and Coasting could be three states of flow for the given week.4 Personally, I would find it useful to know whether my reports were Stretching or Coasting, as this would be a cue as to whether they have additional capacity to take on more.

Follow-up

Once you’ve established the process of sending 5x15 reports, you should check in with your manager to see if they’re finding them useful. If they’re not, then you should try to find out why. If they are, then you should try to find out how you can make them more useful. Depending on the feedback, you might need to adjust the format or the content of the report.

Summary

The 5x15 report is a great way to communicate with your manager and can be used to set the agenda for your 1:1s. If it’s not already part of your company’s culture, then I would recommend trying to introduce it. It’s a great way to advocate for the work of yourself, your project, or your team.


  1. https://www.mindtools.com/aog8dj2/5-15-reports ↩︎

  2. https://en.wikipedia.org/wiki/Yvon_Chouinard ↩︎

  3. The Staff Engineer’s Path, Tanya Reilly, 2022, O’Reilly Media, Inc. p.121 ↩︎

  4. The Software Engineer’s Guidebook, Gergely Orosz, 2023, Pragmatic Engineer, p.32 ↩︎

by John M Costa, III

Kubernetes on DigitalOcean

Overview

Recently, I’ve been working on a project, a part of which is to deploy a Kubernetes cluster. I was hoping to document the process so that it could save some time for my future self and maybe others.

This post is the first in a series of posts which will document the process I went through to get a Kubernetes cluster up and running. In addition to documenting the process, I’ll be creating a repository which will contain the code I used to create the cluster. The repository is available here.

TLDR;

I’m using:

  • DigitalOcean to host my Kubernetes cluster
  • Terraform to manage the infrastructure
  • Spaces for object storage
  • tfenv to manage terraform versions
  • tgenv to manage terragrunt versions

Hosting Platform

Based on the cost estimates for what I was looking to do, I decided to go with DigitalOcean. I’ve used DigitalOcean in the past and have been happy with the service. I also like the simplicity of the platform and the user interface. More importantly, I like that they have a managed Kubernetes offering.

If you’d like to read more about the cost estimates for my project, you can read more about it here.

Kubernetes Cluster

Building up a Kubernetes cluster is documented pretty thoroughly in the tutorials on DigitalOcean’s site1. After working through some of the setup steps, I realized that there could be a quicker way to get a cluster up and running using Terraform, by deferring the control plane setup to DigitalOcean. This would allow me to get a cluster up and running quickly, and then, if it made sense, I could work on automating the setup of the control plane later. It helps that they don’t charge for the control plane.

Infrastructure Management

Terraform is my go-to tool for infrastructure management. I’ve used it in the past to manage infrastructure on AWS, GCP, and DigitalOcean. Given my familiarity with the tool, I decided to use it to manage the infrastructure for my Kubernetes cluster.

Though there’s a kerfuffle with HashiCorp’s open source licensing2, I still decided to use Terraform, at least to start. I assume there will eventually be a migration path to OpenTofu, but again, I’d like to get up and running as fast as reasonable.

Spaces

One of the requirements of using Terraform is that there needs to be a way to manage the state of the remote objects. Keeping the state locally is not a good idea, as it can be lost or corrupted. Keeping the state in the cloud is a better option.

Terraform keeps track of the state of the infrastructure it manages in a file, usually named terraform.tfstate. This file is used to determine what changes need to be made to the infrastructure to bring it in line with the desired state.

Some resources already exist which walk through the setup34 of Spaces.

Spaces Setup

DigitalOcean has a pretty good tutorial on how to set up Spaces. I’ll walk through the steps I took to get it set up, but if you’re new to DigitalOcean I’d recommend following their tutorial.5

As a quick overview, the steps are:

  1. Create a Space bucket in the console. This is typically a one-time step, depending on how you want to scale your projects. It’s as straightforward as setting the region and name of the space. I chose to use the default region of nyc3.

  2. Create a new Spaces Access Key and Secret. This is also a one-time step, assuming you back up your key. The access key is used to authenticate with the space.

Configuring Terraform to use Spaces

Once the space is set up, you’ll need to configure Terraform to use it. First, configure the DigitalOcean provider (for example, in a provider.tf file), including the Spaces access key and secret used to authenticate with the space. A simple version of the configuration looks like this:

terraform {
  required_version = "~> v1.6.0"

  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
      version = "2.32.0"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
  spaces_access_id  = "<access key>"
  spaces_secret_key = "<access key secret>"
}

In addition to the provider configuration, we also need a backend configuration. This tells Terraform where to store the state file; in this case, in the space we created earlier.

terraform {
    backend "s3" {
      key      = "<SPACES KEY>"
      bucket   = "<SPACES BUCKET>"
      region   = "nyc3"
      endpoints = { s3 = "https://nyc3.digitaloceanspaces.com" }

      encrypt                     = true

      # The following are currently required for Spaces
      # See: hashicorp/terraform#33983 and hashicorp/terraform#34086
      skip_region_validation      = true
      skip_credentials_validation = true
      skip_metadata_api_check     = true
      skip_requesting_account_id  = true
      skip_s3_checksum            = true
  }
}
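With the provider and backend configuration in place, the usual Terraform workflow applies. As a sketch (the values are placeholders; note that the S3-compatible backend can also read the Spaces credentials from the standard AWS environment variables, which avoids hardcoding them in the provider block):

```shell
# Hypothetical values; keep real credentials out of version control.
export TF_VAR_do_token="<digitalocean api token>"
export AWS_ACCESS_KEY_ID="<spaces access key>"
export AWS_SECRET_ACCESS_KEY="<spaces secret key>"

terraform init   # initializes the Spaces (s3) backend and providers
terraform plan   # shows the changes Terraform would make
```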

Creating the cluster

Once the backend is configured, we can create the cluster. The cluster is created using the digitalocean_kubernetes_cluster resource. You’ll note that I’m glossing over some of the details in the configuration. I’ll go into more detail in a later post.

If you’re looking for a working example, you can find one in the terraform-digitalocean-kubernetes repository.

resource "digitalocean_kubernetes_cluster" "cluster" {
  name    = "<NAME>"
  region  = "<REGION>"
  version = "<VERSION>"

  # fixed node size
  node_pool {
    name       = "<POOL NAME>"
    size       = "<INSTANCE SIZE>"
    node_count = "<NODE COUNT>"
  }
}
by John M Costa, III

Kubernetes Hosting Services

Overview

When looking for a hosting platform for Kubernetes, I wanted to find a platform which was easy to use, had a good developer experience, and was cost-effective. Easy to use is somewhat subjective and certainly depends on familiarity with the platform, domain knowledge, and other factors. Therefore, I’ll try to be as objective as possible when evaluating the platforms, looking at Developer Experience and Cost Effectiveness.

For others, there could be other dimensions which are more important. For example, if you’re looking to meet certain compliance requirements, you might want to look at the security and compliance features of the platform and rate them accordingly.

For me and my project, these are not yet significant concerns.

Hosting Platform Options

An AI Assisted search via OpenAI’s ChatGPT1 for Kubernetes hosting platforms yields the following results:

| Hosting Provider | Cost Effectiveness | Developer Experience |
| --- | --- | --- |
| AWS | - Components: EC2, S3, RDS, Lambda, etc.<br>- Pricing: Pay-as-you-go model, variable costs | - Productivity: High<br>- Impact: Broad range of services<br>- Satisfaction: Generally positive |
| Google Cloud | - Components: Compute Engine, Cloud Storage, BigQuery, etc.<br>- Pricing: Sustained use discounts, per-minute billing | - Productivity: High<br>- Impact: Advanced AI and ML capabilities<br>- Satisfaction: Positive developer tools |
| DigitalOcean | - Components: Droplets, Spaces, Databases, etc.<br>- Pricing: Simple and transparent pricing, fixed monthly costs | - Productivity: Moderate (simplified services)<br>- Impact: Suitable for smaller projects<br>- Satisfaction: Good user interface |
| Azure | - Components: Virtual Machines, Blob Storage, Azure SQL Database, etc.<br>- Pricing: Flexible pricing options, Hybrid Benefit for Windows Server | - Productivity: High<br>- Impact: Integration with Microsoft products<br>- Satisfaction: Depends on familiarity with Microsoft ecosystem |

Query:

create a markdown table which includes the following hosting providers:
AWS
Google Cloud
DigitalOcean
Azure

use the following columns so that each option could be evaluated:
- developer experience
- cost effectiveness

developer experience should include productivity, impact, satisfaction
cost effectiveness should include components and pricing for those components

Validating the Findings

Cost Effectiveness

The following are specifications for a development environment. The goal is a non-high-availability Kubernetes cluster with 2 worker nodes intended for a development environment. The cluster should have a managed control plane and managed worker nodes, and should have object storage and load balancing. The cluster should also have a managed Kafka instance.

Pricing has been calculated generally using two worker nodes, and the cheapest option for the managed control plane.

Monthly Pricing (as of November 20232):

| Aspect | AWS3 | Google Cloud4 | DigitalOcean5 | Azure6 |
| --- | --- | --- | --- | --- |
| Managed Control Plane | 73.00 USD | 73.00 USD | 00.00 USD | 73.00 USD |
| Managed Worker Nodes | 27.45 USD | 97.09 USD | 36.00 USD | 175.20 USD |
| Object Storage | 00.02 USD | 0.023 USD | 05.00 USD | 52.41 USD |
| Load Balancing | 31.03 USD | 18.27 USD | 12.00 USD | 23.25 USD |
| Managed Kafka | 86.58 USD | 31.13 USD | 15.00 USD | 10.95 USD |
| Managed Database | 69.15 USD | 25.55 USD | 15.00 USD | 24.82 USD |
| Total | 287.23 USD | 245.97 USD | 83.00 USD | 359.63 USD |

Developer Experience

GitHub describes Developer Experience (DevEx)7 in terms of productivity, impact, and satisfaction. My thought is to document my experience so that others can evaluate the platforms for themselves.

Given the pricing schedule above, it’s not currently feasible for me to fully evaluate all the platforms at the same time. Instead, I’ll focus on the most cost-effective one, DigitalOcean. If given the opportunity and necessity, I’ll evaluate the other platforms in the future.

In a follow-up article, I’ll report my observations and experience. For now, I’ll leave this as a placeholder.

Thanks for reading!


  1. https://chat.openai.com/ ↩︎

  2. For expediency, I’ve tried to choose similar services across the platforms. A better evaluation might detail the precise specifications of each service; I’ve left out some of those details and could backfill them if they become more relevant. ↩︎

  3. https://calculator.aws/#/estimate. 2 t3a.small nodes as workers ↩︎

  4. https://cloud.google.com/products/calculator. 2 n1-standard-2 nodes as workers ↩︎

  5. https://www.digitalocean.com/pricing/. 2 Standard Droplets as workers, 1GB of object storage ↩︎

  6. https://azure.microsoft.com/en-au/pricing/calculator/ ↩︎

  7. https://github.blog/2023-06-08-developer-experience-what-is-it-and-why-should-you-care/ ↩︎

by John M Costa, III

Git Hooks with Pre-Commit Framework

Overview

Pre-commit is a framework for managing and maintaining multi-language pre-commit hooks. It is a great tool for ensuring consistency across a set of projects or a team. Not only can it help with consistency, but it can also help with formatting by automatically formatting files before they are committed.

What is a git hook?

Git hooks are scripts¹ ² that run before or after certain git commands. They are stored in the .git/hooks directory of your repository. Git hooks are not stored in the repository itself, so they are not version controlled. This means that if you want to share a git hook with your team, you need to share the script itself.
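To make this concrete, here's a minimal hand-rolled hook of the kind that lives in .git/hooks. The TODO check is purely illustrative, not something pre-commit ships:

```shell
# Git looks for hooks in .git/hooks; `git init` normally creates this directory.
mkdir -p demo-repo/.git/hooks

# A minimal pre-commit hook: abort the commit if staged changes contain "TODO".
cat > demo-repo/.git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached | grep -q 'TODO'; then
  echo "Commit blocked: staged changes contain TODO"
  exit 1
fi
EOF
chmod +x demo-repo/.git/hooks/pre-commit
```

Because this script lives under .git/, it never leaves your machine; sharing it means copying the file into every teammate's clone, which is exactly the problem pre-commit addresses.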

What is pre-commit?

Pre-commit solves the problem of sharing git hooks with your team by storing the configuration in the project repository itself. This framework allows git hooks to be managed consistently across any number of projects.

Setting up pre-commit

Pre-commit is a Python package that can be installed with pip. If you’re using macOS, you can install it with brew.

Install Configuration

Pre-commit uses a configuration file to determine which hooks to run and how to run them. This configuration file is stored in the root of your project and is named .pre-commit-config.yaml. This file is used to configure the hooks that will be run and the order in which they will be run.

To generate an initial version of the file, you can run pre-commit sample-config > .pre-commit-config.yaml. This will generate a sample configuration file with a few available hooks.

Once pre-commit is installed, you can run pre-commit install to install the git hooks. Now, when you run git commit, the hooks will run before the commit is created. If any of the hooks fail, the commit will be aborted.

Prescriptive Hook Choices

Pre-commit has a large number of hooks available. Some are more useful than others, most being language specific. Here’s a list of the hooks I like to use for every project.

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
# file
      - id: end-of-file-fixer
        description: Fixes missing end-of-file newline in files.
      - id: mixed-line-ending
        args: ['--fix=lf']
        description: Forces to replace line ending by the UNIX 'lf' character.
      - id: trailing-whitespace
        description: Trims trailing whitespace.
      - id: check-added-large-files
        args: ['--maxkb=100']
        description: Checks for large files being added to git.
# format
      - id: check-yaml
        description: Checks that all yaml files are valid.
      - id: check-json
        description: Checks that all json files are valid.
      - id: check-toml
        description: Checks that all toml files are valid.

End of File Fixer

This hook ensures that all files have a newline at the end of the file. This is a common issue when working with multiple operating systems: Windows uses \r\n for newlines, while Linux and macOS use \n.

Not having a newline isn’t just bad style; it can break some tools.³ For example, suppose a file contains the following (without a trailing newline character):

first line
second line

Now if you run wc -l file, you get the following output, indicating only one line in the file:

% cat file
first line
second line
% wc -l file
       1 file

This is because of how POSIX defines a line: a sequence of zero or more non-<newline> characters plus a terminating <newline> character.⁴ ⁵
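You can reproduce this with printf, which makes the missing newline explicit:

```shell
# Two files with identical visible content; only the trailing newline differs.
printf 'first line\nsecond line\n' > with_newline
printf 'first line\nsecond line'   > without_newline

wc -l < with_newline     # prints 2
wc -l < without_newline  # prints 1: the last line has no terminating <newline>
```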

Mixed Line Ending

If you’re working in a mixed environment where developers are using different operating systems, this hook will ensure that all files have the same line endings. This hook will convert all line endings to the specified type. In the configuration above, I have it set to lf for Linux/Unix line endings as most (all?) of my software is intended to run on Linux or some Linux variant.

Trailing Whitespace

Whitespace differences can be picked up by source control systems and flagged as diffs, causing frustration for developers. This hook removes all trailing whitespace from files, making for a more consistent experience.
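A rough sketch of what this hook does, using sed (piping instead of editing in place, since in-place flags differ between GNU and BSD sed):

```shell
# Strip trailing spaces and tabs from the end of each line.
printf 'hello   \nworld\t\n' | sed 's/[[:space:]]*$//'
```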

Check Added Large Files

Git is notorious for not handling large files well: every version of a file is kept in history, so large binaries bloat the repository for everyone who clones it. This hook checks for large files being added to the repository and fails if any exceed the configured limit.
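A sketch of the size check this hook performs, against the 100 KB limit configured above:

```shell
# Create a 150 KB file, then compare its size against a 100 KB limit.
dd if=/dev/zero of=big.bin bs=1024 count=150 2>/dev/null
size_kb=$(( $(wc -c < big.bin) / 1024 ))
if [ "$size_kb" -gt 100 ]; then
  echo "big.bin is ${size_kb} KB, over the 100 KB limit"
fi
```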

Check YAML, JSON, TOML

Errant commas, missing quotes, and other syntax errors can be difficult to find in configuration files. These hooks will check for syntax errors in the specified file types.

Conclusion

Pre-commit is a great tool for ensuring consistency across a set of projects or a team. It can also automatically format files before they are committed, which is especially useful when working with a team that has different preferences for formatting.

by John M Costa, III

5x15 Weekly Update and Coachee Checklist

Overview

After reading One Bold Move a Day I decided to create a checklist for my coaching interactions. This includes being coached as well as a template for those I plan to coach. This checklist is a work in progress and will be updated as I learn more about coaching and leadership.

The 5x15 Weekly Update¹ ²

Something I’ve been doing for a while now has been to provide a weekly update to my manager. This update includes a list of wins and accolades. I’ve found this to be a great way to keep track of my accomplishments and to help me remember them when it comes time for my annual review. The gist is that you create an update to manage up which takes no longer than 15 minutes to create and no longer than 5 minutes to read.

You might find that you cover all this in your 1:1s with your manager, and that may be good enough for you or your manager. For others in organizations where there’s a lot of competition, writing this out on a weekly basis is a great way to advocate for yourself week over week and makes writing your yearly review easier.

To build this out, I’ve decided to use the 5x15 format. Here’s an example template:

Name: <Your Name>
Week Ending: <Date>

## Are you planning to work next week, from <day> to <day>?

Yes. If no, why not?

## Accomplishments for the week:

- Project 1
   - Company's Culture
      - Organization's Culture: Culture Item 1
         - Team's Culture: Culture Item 1
            - My weekly contribution 1
            - My weekly contribution 2
            - My weekly contribution 3
         - Team's Culture: Culture Item 2
            - My weekly contribution 1
            - My weekly contribution 2
            - My weekly contribution 3
         - Team's Culture: Culture Item 3
      - Organization's Culture: Culture Item 2
         - Team's Culture: Culture Item 1
            - My weekly contribution 1

## Priorities for next week:

- Priority 1
- Priority 2
- Priority 3

## Stats:
 - Energy level: low, medium, high + direction of change
 - QOL: low, medium, high + direction of change
 - Credibility: low, medium, high + direction of change

## Planned PTO:
  - <Date> - <Date>
  ...

## Examples, screenshots, etc..
  - example 1
  - example 2

By Section

Are you planning to work next week, from <day> to <day>?

This helps your manager know if you’re planning to take time off. If you are, it shouldn’t be a surprise to them. As a manager, a gentle reminder about who will be unavailable can be helpful when you’re reflecting on the past week or looking forward to the next.

Accomplishments for the week

This is where you list your wins and accolades as organized by your company’s culture, your organization’s culture, and your team’s culture. Sometimes these items may not have alignment. This could be an opportunity to discuss this with your manager and see how better alignment could be achieved.

Not everyone’s comfortable with self-promotion. This is a great way to practice.

Priorities for next week

Keep this simple. List your top 3 priorities for the next week. This is a great way to keep your manager informed of what you’re working on and to help you stay focused on what’s important. If you’re not sure what your priorities are, this is a great opportunity to discuss this with your manager.

Stats

This is a great way to keep track of the vital stats of your work persona.

Energy level

“Different people are energized or exhausted by different things.”³ This is a way to keep your manager informed of what’s going on in your work and/or your life. Good managers will use this information to help you be successful, perhaps providing the opportunity to coast during low-energy times or to take on more challenging work during periods of high energy.

Quality of Life

How is the work/life balance? Are you feeling overwhelmed? Are you feeling bored? How are you enjoying your projects and the people you’re working with? Not everything is going to be perfect all the time. Oftentimes we can’t change the situation, but we can change our perspective. How your mental or physical health is doing can also be an input here. Good managers can often help with challenging situations or provide perspective to get through them. Looking for the positive in a situation can help us get through times when QOL is lower.

Credibility

“You can build credibility by solving hard problems, being visibly competent and consistently showing good technical judgment.”4

The approach I take here is to gauge how much trust others place in the technical solutions I share. Some resources might consider this part of Social Capital, but my feeling is that Credibility and Social Capital are so intertwined that they are parts of the same thing.

Checklist⁵

  • Keep a list of wins and accolades to help you remember your accomplishments. Do this daily.
  • Provide your manager with a 5x15 update every week. Include wins and accolades. This is a weekly practice.
  • Focus on what you can control and let go of what you can’t. This is a daily practice.
  • Use data to support your work and decisions. This is a daily practice.
  • Change your perspective. Look at the situation from a different angle. This is a daily practice.
  • Offer compassion to yourself and others. You don’t know what’s going on in someone else’s life, so give them some space for grace. This is a daily practice.
  • Measure your stats. This is a weekly practice.

Prompts³

  • What compliments do you hear frequently?
  • What projects bring you energy? When do you feel most fulfilled at work?
  • Do you feel like you have enough time to do your work at a level of quality that you’re proud of?
  • Are you finding that you have enough time for things outside of work that are important to you?
  • How are your peers receiving your work? Do you feel like you’re making a positive impact?

References


  1. Orosz, Gergely. The Software Engineer’s Guidebook: Navigating senior, tech lead, and staff engineer positions at tech companies and startups. (p. 38). Pragmatic Engineer BV, Amsterdam, Netherlands. ↩︎

  2. Reilly, Tanya. The Staff Engineer’s Path: A guide for individual contributors navigating growth and change. (p. 121). O’Reilly Media, Inc. ↩︎

  3. Reilly, Tanya. The Staff Engineer’s Path: A guide for individual contributors navigating growth and change. (p. 122). O’Reilly Media, Inc. ↩︎ ↩︎

  4. Reilly, Tanya. The Staff Engineer’s Path: A guide for individual contributors navigating growth and change. (p. 123). O’Reilly Media, Inc. ↩︎

  5. Hocking, Shanna A. One Bold Move a Day: Meaningful Actions Women can take to fulfill their Leadership and Career Potential (p. 29-32). McGraw-Hill ↩︎

by John M Costa, III

Scaling with GitHub Action Workflows

Overview

Platform engineering has become increasingly popular in recent years. The idea of a platform team is to provide a set of tools and services that enable other teams to build and deploy their applications, ideally at scale. This allows teams to focus on their core competencies and not have to worry about the underlying infrastructure.

There’s plenty of great resources out there that go into detail about what a platform team is and how to build one.

At the core of any platform team is most likely an IDP, or internal developer portal. This is a place where developers can go to find documentation, guides, and other resources that will help them build and deploy their applications.

For a single developer, an internal developer portal is probably overkill. That said, there’s still concepts which can be applied to help scale development, if desired.

Scaling with GitHub Action Workflows

In this post, I’ll be going over how I’ve used GitHub Actions to scale my development efforts, something I’ve become accustomed to for workflow standardization. I’m sure there’s optimizations that can be made, but this is what I’ve found to work for me right now.

The Problem

I’ve been working on a few projects recently that I’d like to share similar workflows, templates, and linting. After the third project, I realized that I was copying and pasting a lot of the same code over and over again. This is not ideal for a few reasons, but mainly because if I want to make a change to a workflow, I have to make it in multiple places.

A Solution

There’s probably a few different solutions to this sort of problem. I decided to use GitHub Actions workflows to solve it. I created a repository called template-repository and added a few workflows to it, like linting. I then created a new repository called workflow-templates and added a workflow which:

1) checks out the source repository, "template-repository"
2) checks out the target repository
3) copies the workflows from the source repository to the target repository
4) commits and pushes the changes to the target repository
5) opens a pull request for the changes

Here’s a version of the repository copy workflow:

name: Add linter to repository

permissions:
  pull-requests: write
  contents: write

on:
  workflow_dispatch:
    inputs:
      source_namespace:
        required: true
        type: string
        description: The namespace to copy the templates from.
        default: "johncosta"
      source_repository:
        required: true
        type: string
        description: The repository to copy the templates from.
        default: "template-repository"
      source_tag:
        required: true
        type: string
        description: The version tag to checkout for templates.
        default: v0.0.1
      target_namespace:
        required: true
        type: string
        description: The namespace to copy the templates to.
        default: "johncosta"
      target_repository:
        required: true
        type: choice
        description: The repository to copy the templates to.
        options:
          - johnmcostaiii.com
          - johnmcostaiii.net
          - smart-oil-api-python
          - U6143-ssd1306-golang
          - documentation
      target_tag:
        required: true
        type: string
        description: The ref (branch or tag) to check out in the target repository.
        default: main
      committer_name:
        required: true
        type: string
        description: The users name to use for the commit.
        default: "John Costa"
      committer_email:
        required: true
        type: string
        description: The users email to use for the commit.
        default: "john.costa@gmail.com"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          repository: ${{ github.event.inputs.source_namespace }}/${{ github.event.inputs.source_repository }}
          ref: ${{ github.event.inputs.source_tag }}
          path: ./src/${{ github.event.inputs.source_namespace }}/${{ github.event.inputs.source_repository }}

      - uses: actions/checkout@v4
        with:
          repository: ${{github.event.inputs.target_namespace}}/${{ github.event.inputs.target_repository }}
          ref: ${{ github.event.inputs.target_tag }}
          path: ${{github.workspace}}/src/${{github.event.inputs.target_namespace}}/${{ github.event.inputs.target_repository }}

      - name: Modify files
        run: |
          SOURCE_FOLDER=${{github.workspace}}/src/${{github.event.inputs.source_namespace}}/${{ github.event.inputs.source_repository }}
          TARGET_FOLDER=${{github.workspace}}/src/${{github.event.inputs.target_namespace}}/${{ github.event.inputs.target_repository }}

          # Copy the linter configuration and workflow from the source to the target.
          # Note: `cp -r src/. dest/` copies the directory contents, rather than
          # nesting a second `linters` directory inside the one created above.
          mkdir -p ${TARGET_FOLDER}/.github/linters
          mkdir -p ${TARGET_FOLDER}/.github/workflows
          cp -r ${SOURCE_FOLDER}/.github/linters/. ${TARGET_FOLDER}/.github/linters/
          cp ${SOURCE_FOLDER}/.github/workflows/linter.yml ${TARGET_FOLDER}/.github/workflows/linter.yml

      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v5
        with:
          token: ${{ secrets.ACCESS_TOKEN }}
          path: ${{github.workspace}}/src/${{github.event.inputs.target_namespace}}/${{ github.event.inputs.target_repository }}
          title: "chore: update linter workflow to ${{ github.event.inputs.source_tag }}"
          commit-message: "chore: linter workflow to ${{ github.event.inputs.source_tag }}"
          base: "main"
          branch: "update-linter-workflows-${{ github.event.inputs.source_tag }}"

You’ll notice that I’m using the peter-evans/create-pull-request action to create the pull request. This is a great action which helps both commit the changes and open a pull request for them.

To make this workflow work, I had to create a personal access token with the pull-requests: write and contents: write permissions. I then added the token as a secret to the repository.

Lastly, this is a workflow dispatch workflow, which means that it can be triggered manually. This is great because it allows me to trigger the workflow whenever I want to update the workflows in a repository. To ensure that I don’t point to the wrong repository, I’ve added a few input parameters to the workflow. This allows me to specify the source and target repositories, as well as the source and target tags. This is useful because I can point to a specific version of the source repository, and then update the target repository to use that version.

Conclusion

This is just one example of how I’ve used GitHub Actions to scale my development efforts. I’m sure there’s other ways to do this, but this is what I’ve found to work for me right now. I’m sure there’s optimizations that can be made, and I’m always looking for feedback. Feel free to reach out to me on Twitter or in the comments below. Thanks for reading!