Run an Ubuntu VM on an Apple M1 Mac

The simplest way to run an Ubuntu Linux virtual machine on an Apple M1 Mac is to use Multipass.

First, install Multipass:

$ brew install --cask multipass

Multipass provides a default instance named primary, which is created and started automatically the first time we use it.

To start the primary Ubuntu instance:

$ multipass start

It should show:

Starting primary

Show running instances:

$ multipass list

The output should be similar to:

Name    State   IPv4         Image
primary Running 192.168.64.2 Ubuntu 20.04 LTS

Now we can open a shell inside the VM using:

$ multipass shell

You should see a prompt for the shell inside the Ubuntu VM:

ubuntu@primary:~$

Your Ubuntu VM is now ready to use!

Additionally, to mount a directory so that files from the host machine can be accessed inside the VM, see this post:

Mount a Host Machine Directory Inside a Multipass VM

Mount a Host Machine Directory Inside a Multipass VM

To access files on the host machine from a Multipass Ubuntu VM, we can mount a local directory into the virtual machine.

Assuming we have a host system directory my-stuff, we can mount it into the VM with:

$ multipass mount my-stuff primary:/home/ubuntu/my-stuff

Note that primary here is the VM instance name and is required in the mount target.
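
If we are mounting into a differently named instance, that name replaces primary. For example, with a hypothetical instance launched as my-vm:

$ multipass launch --name my-vm
$ multipass mount my-stuff my-vm:/home/ubuntu/my-stuff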

Check that the mount shows up correctly in Multipass:

$ multipass info primary

This should show your local and VM directory, with a line similar to:

Mounts: /Users/abc/Documents/my-stuff => /home/ubuntu/my-stuff

SSH into the VM to see the directory:

$ multipass shell
ubuntu@primary:~$ ls my-stuff
file1
file2
...

The directory should now be accessible inside the VM and we can share files between the host machine and VM.


Rename a Branch in Git

Sometimes we need to rename an existing Git branch without creating a new branch or removing the old branch.

First, make sure you have the existing branch to rename checked out:

$ git branch

Output:

main
* old-name

To rename the branch use:

$ git branch -m old-name new-name

The -m flag is short for “move”, similar to moving a file to rename it on Unix systems.
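
Since the branch being renamed is already checked out, the old name can also be omitted:

$ git branch -m new-name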

Confirm the new name:

$ git branch

Output:

main
* new-name

Open Visual Studio Code from the Terminal on macOS

It is useful to be able to open VSCode from the command line with the code command.

To add this ability, edit your PATH as follows:

PATH=$PATH:/Applications/Visual\ Studio\ Code.app/Contents/Resources/app/bin

Add this to your .bash_profile or equivalent.

This ensures your shell can find VSCode's code binary from any directory.
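
To verify, check that the shell now resolves the command (the exact path depends on where VSCode is installed):

$ which code
/Applications/Visual Studio Code.app/Contents/Resources/app/bin/code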

Now you can open any file in your current working directory with VSCode using:

$ code file.js

Or open the entire current directory in the editor:

$ code .

 

Include the Same Query More than Once in a GraphQL Request

It can sometimes be useful to request two or more copies of the same query result in one GraphQL request.

We can repeat the same query twice, for example, by giving the second copy an alias, as in the example below:

{
  getBook {
    title
  }

  secondCopy: getBook {
    title
  }
}

The alias secondCopy is required to create a unique key in the output data.

The alias we use replaces the field name in the output response, as below:

{
  "data": {
    "getBook": {
      "title": "Book A"
    },
    "secondCopy": {
      "title": "Book A"
    }
  }
}

We can request as many copies as desired in the query.
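
Aliases can also be combined with arguments to request variations of the same query; for example, assuming getBook took an id argument (hypothetical schema):

{
  firstBook: getBook(id: 1) {
    title
  }

  secondBook: getBook(id: 2) {
    title
  }
}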

 

Convert OpenAPI YAML File to JSON

We can convert an OpenAPI (or Swagger) specification file into JSON using the yamljs utility.
We can install the command globally using:

$ npm install -g yamljs

This should make yaml2json available in the shell. We can then run:

$ yaml2json input.yaml -i4 -p > output.json

The output file is the JSON equivalent of the YAML spec.

The -p param means “pretty” and -i4 sets the indentation to 4 spaces.
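
For example, a minimal YAML snippet like:

openapi: 3.0.0
info:
  title: Example API
  version: 1.0.0

would be converted to the equivalent JSON (exact formatting may vary slightly):

{
    "openapi": "3.0.0",
    "info": {
        "title": "Example API",
        "version": "1.0.0"
    }
}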

References

https://www.npmjs.com/package/yamljs

 

 

Create an HTML Test Coverage Report in Go

We can generate a unit test coverage report in Go using the following commands and tools, which are included with the language.

To run unit tests with coverage collection use:

go test -covermode=count -coverpkg=./... -coverprofile cover.out -v ./...

The -coverpkg parameter is needed so that code coverage includes all packages, not just the top-level files in the project directory.

Generate the visual coverage HTML files using:

go tool cover -html cover.out -o cover.html

If you are using a Makefile on macOS, you can add a target to run all of this together in order and open the page in a browser (note that the recipe lines must be indented with a tab):

test-coverage:
  mkdir -p coverage
  go test -v ./... -covermode=count -coverpkg=./... -coverprofile coverage/coverage.out
  go tool cover -html coverage/coverage.out -o coverage/coverage.html
  open coverage/coverage.html

Then run:

make test-coverage

This will open the very nice coverage report HTML page in a browser.

 

Batch Resources Endpoint in REST API Design

Sometimes in a REST API we want a single endpoint to be able to return multiple types of resources at once.

Some names for this are a batch resources endpoint or simply batch endpoint; another name is bulk endpoint.

The endpoint itself could use the resource name “/batch-resources” or simply “/batch”.

Suppose we have a public library system API. The batch call could look like the following:

GET /batch-resources?list=(/locations,/books,/authors)

The returned result can be organized by resource:

{
  "locations": [
    {
      "id" 1,
      ...
    },
    ...
  ],
  "books": [
    {
      "id": 1,
      ...
    },
    ...
  ],
  "authors": [
    {
      "id": 1,
      ...
    },
    ...
  ]
}

If we need filters or sub-resources we can pass them with the individual resources listed, as if they were individual calls. For example:

GET /batch-resources?list=(/authors?q=term,/authors/1/books)

In this case the results should contain the full request paths and query strings as the keys:

{
  "authors?q=terms": [ ... ],
  "authors/1/books": [ ... ]
}

Although GET feels most natural and is the proper REST verb here, if we need very complex requests all at once in the batch request, we could use a POST request with the specific requests in the POST body. For example:

POST /batch-resources
{
  "requests": [
    "/locations?q=term",
    "/books?q=term",
    "/authors?q=term"
  ]
}

The returned response would again be organized by request key.
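
The response could then be shaped as follows (illustrative sketch):

{
  "/locations?q=term": [ ... ],
  "/books?q=term": [ ... ],
  "/authors?q=term": [ ... ]
}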

Using POST has the advantage of allowing a much larger body for the requests; a very long GET request may hit the length limit of the query string.

Note that if you need a very flexible API with batch requests, especially serving many different clients with different requirements, it may be appropriate to consider GraphQL.

 

REST API Design: Maintaining an API Style Guide

In a microservices architecture it is important to independently develop, run and deploy microservices.
However, it is beneficial for the services to be similar enough so that working on many services is seamless for developers.
That means similar deployment strategies, development frameworks and so on. This is why Starter Kits are so useful.
A common approach also makes work across teams much smoother.

An API style guide helps another dimension of this: API design consistency.

For example, suppose we have a client library function which depends on a date field. It is much more re-usable across the company if we know all microservices should expose date fields in Unix epoch time with milliseconds or another standard format. When a new microservice is being integrated, we can quickly re-use this function with a new date field without having to figure out or potentially parse a new date format just for this service.

Of course, generally an API should follow good REST principles like well-defined resources and sub-resources, but the more we get into details the more can be spelled out and adopted as convention. Just how far REST principles themselves are taken can be an organization-wide convention.
Further, many details are independent of REST and fully up to the designer; this is where consistency across an organization really helps.

The API Style Guide could be a document on the internal API Developer Portal or other API hub, or simply a wiki or Confluence document.

The following elements are useful to document in an API Style Guide. This is not an exhaustive list.

Resource Naming

For example: use plurals with camel-case or plurals with dashes (e.g. /myResources or /my-resources)

Identifier Format

Example: Use UUID for all IDs or simply integers.
Include canonical IDs for all returned resources under the field “id”.
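
For example, with UUIDs every returned resource would include an id field like (hypothetical value):

{
  "id": "9b2f4e1a-0c5d-4f7e-8a3b-6d1e2c3f4a5b",
  ...
}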

Date Formats

Example: Use Unix epoch time with milliseconds (or without milliseconds).

Date Field Naming and Human-Readable Versions

For example, always use the convention: “createdAt”, “modifiedAt”.
If these are in epoch time, a human-readable version could be “createdAtPretty”.
See this post for more details.
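
For instance, a resource could then include both fields (illustrative values):

{
  "createdAt": 1609459200000,
  "createdAtPretty": "2021-01-01T00:00:00Z"
}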

Slug Fields

Example: always provide a slug name or identifier for items.
See this post for more details.

Header Conventions

E.g.: Use headers named ‘Custom-Header’ or ‘X-Custom-Header’ (although the X- prefix style is technically deprecated).

Content-type Headers

Always return the proper Content-Type header, like “application/json”.

Controlling Response Format and Language

For example, use the Accept header to request specific data formats like JSON or XML. Another example: use the Accept-Language header to request responses in different languages like English and French.
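
A request could then include headers such as:

Accept: application/json
Accept-Language: fr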

Conventions for Descriptive Metadata for Lists

The style in which metadata about lists is returned along with list results is a key design decision.
Suppose for our resource /items we also want to know the number of items returned.

We can use a custom header:

Element-Count: 5

Or we can return a top level object and return the items as a field:

{
  "itemCount": 5,
  "items": [ ... ]
}

Such a design decision is very difficult to change once clients depend on the API and start requiring data about the list itself.
We can avoid API versioning and inconsistency by adopting a standard approach in a style guide across the company.

Documentation Approach

We can have a convention on documentation. For example, all services should expose documentation on /docs as OpenAPI (Swagger).

HTTP Status Codes

These should be consistent across most APIs, but we can write down some conventions. For example: a successful update request to a resource should respond with 204 No Content and no body.

Error Response Format

For example, all errors must be in a standard format like the sample below, including a documentation link:

{
  "code": "123",
  "message": "Error message",
  "link": "https://documentation-link"
}

Hypermedia Element Conventions

If services use any hypermedia elements, these could be standardized in the style guide.
For example, for linking to related resources we could specify using HAL links.
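
A HAL-style response embeds its links under a _links object, for example (illustrative resource and paths):

{
  "id": 1,
  "title": "Book A",
  "_links": {
    "self": { "href": "/books/1" },
    "author": { "href": "/authors/7" }
  }
}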

Again, this is definitely not an exhaustive list. Any aspect of microservice API design can be written down as a convention in an API Style Guide, making such a guide an asset for Developer Experience both for API developers as well as API consumers.

 

Create a MySQL Docker Container with a Predefined Database

It is often useful to start up a Docker container running a database server with a predefined, prepopulated database created via an SQL script, usable as soon as the container starts.

This can serve as a dependency for local development or for tests, among other uses.

For macOS: if it is not present, install MySQL with Homebrew to get the MySQL client.

brew install mysql

To define the container we need two files in the same directory. First, the Dockerfile, which extends the mysql image and specifies the init script:

Dockerfile:

FROM mysql

COPY ./create-local-db.sql /tmp

CMD [ "mysqld", "--init-file=/tmp/create-local-db.sql" ]

The SQL script to define the database in the container:

create-local-db.sql:

-- Local database definition.

DROP DATABASE IF EXISTS local_db;

CREATE DATABASE local_db;

USE local_db;

DROP TABLE IF EXISTS books;

CREATE TABLE books (
  id int(10) NOT NULL,
  title varchar(30) NOT NULL DEFAULT '',
  PRIMARY KEY (id)
);

INSERT INTO books VALUES(1, 'Book 1');
INSERT INTO books VALUES(2, 'Book 2');

Build the image in the directory containing the Dockerfile, tagging it with the name my_db (for example):

docker build -t my_db .

Run the container on port 3306:

docker run -e MYSQL_ROOT_PASSWORD=pw -p 3306:3306 my_db

Note that we must pass the root password environment variable to the server.
(For docker-compose this would go under environment: )
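
For example, a minimal docker-compose.yml could look like the following sketch, assuming the Dockerfile above is in the same directory:

services:
  my_db:
    build: .
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: pw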

The server should indicate that it is ready for connections.

In another terminal, connect to the server with the MySQL client:

mysql --host=127.0.0.1 --port=3306 -u root -p

You should see the MySQL prompt and be able to run queries.

mysql> use local_db;
Database changed

mysql> show tables;
+--------------------+
| Tables_in_local_db |
+--------------------+
| books              |
+--------------------+
1 row in set (0.00 sec)

The MySQL database inside the Docker container is ready to use.