Challenge Failed Error with Certbot Renewal

When attempting to renew an SSL certificate for a domain using Certbot on Ubuntu, we may encounter the following problem:

Renewing an existing certificate for www.my-domain.com
Performing the following challenges:
http-01 challenge for www.my-domain.com
Waiting for verification...
Challenge failed for domain www.my-domain.com

This usually means that the domain cannot be reached from the outside: the http-01 challenge requires Let's Encrypt to connect to our server, and when HTTP traffic is redirected to HTTPS, the validation request must also succeed over TLS (SSL).
We need to make sure that the domain can be reached on port 443.

Thus, ensure the firewall (UFW) allows connections on port 443:

$ ufw allow 443

Depending on the user executing the command, this may need to be run as superuser:

$ sudo ufw allow 443

We can now try the Certbot renew command again.
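
For example, after confirming the rule is active:

$ sudo ufw status
$ sudo certbot renew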

 

Get RSI Values for a Stock using Yahoo Finance Data

We can use the Python Technical Analysis Library (ta) for the RSI (Relative Strength Index) calculation.

A simple and free source of finance data is Yahoo Finance; the most convenient way to query it is the yfinance Python library.

First, install the Technical Analysis library:

$ pip install ta

Install the Yahoo Finance library:

$ pip install yfinance

We use NVDA as an example, on a 5-minute timeframe.

The following script will retrieve the data and pass the results to the TA library to calculate the RSI.
The results are stored in a Pandas series; after retrieval we print out a sample of the latest values.

import yfinance as yf

from ta.momentum import RSIIndicator

ticker = 'NVDA'

# Make sure to use a window with enough data for the RSI calculation.
data = yf.download(tickers=ticker, period='5d', interval='5m')

# Closing prices; squeeze() guards against yfinance
# returning a single-column DataFrame instead of a Series.
closeValues = data['Close'].squeeze()

# Use the common 14 period setting.
rsi_14 = RSIIndicator(close=closeValues, window=14)

# This returns a Pandas series.
rsiSeries = rsi_14.rsi()

# Latest 10 values of the day for demonstration.
print(rsiSeries.tail(10))

Below is the output for a single run. The series contains timestamps and RSI values.

Datetime
2024-05-24 12:35:00-04:00    52.733213
2024-05-24 12:40:00-04:00    56.698742
2024-05-24 12:45:00-04:00    56.991273
2024-05-24 12:50:00-04:00    62.709294
2024-05-24 12:55:00-04:00    55.964462
2024-05-24 13:00:00-04:00    56.631537
2024-05-24 13:05:00-04:00    51.238671
2024-05-24 13:10:00-04:00    53.740376
2024-05-24 13:15:00-04:00    51.827860
2024-05-24 13:20:00-04:00    52.300520
Name: rsi, dtype: float64
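
As a quick usage example, continuing from the script above (the 70/30 thresholds are the common overbought/oversold convention, not a library feature):

# Latest non-NaN RSI reading.
latestRsi = rsiSeries.dropna().iloc[-1]

if latestRsi > 70:
  print("Overbought:", latestRsi)
elif latestRsi < 30:
  print("Oversold:", latestRsi)
else:
  print("Neutral:", latestRsi)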

 

Issue with Getting All Zeros from the Camera Using OpenCV in Python

Reading images from the camera (on macOS, for example) using Python with OpenCV may return image arrays full of zero values even though the camera turns on correctly.

This may be due to the fact that we are not waiting for the data properly.

The solution is to make sure to read the data in a loop, i.e. to wait for the data to arrive instead of expecting it to be available instantly.
One single read may not be enough. The code below illustrates the situation.

The following will most likely return all zeros. Note that even though the isOpened() method returns True and the success value from captureObject.read() is also True, the camera data may not yet be ready.

import cv2

captureObject = cv2.VideoCapture(0)

if captureObject.isOpened():
  success, testImage = captureObject.read()

  # This may be True despite all-zeros images.
  print(success)

  # Numeric values of the image.
  print(testImage)

The following code shows how to properly wait for the camera data to be available in a loop. Note that the first few reads will output zeros as well.
Afterward, a stream of numeric values should be output until we exit the script.

import cv2

captureObject = cv2.VideoCapture(0)

while captureObject.isOpened():
  success, testImage = captureObject.read()

  # Skip frames that failed to read.
  if not success:
    continue

  # Numeric values of the image.
  print(testImage)

  # Use CTRL-C to exit.
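
If we only need a single valid frame, a variation is to keep reading until real data arrives. This is a minimal sketch; the any() call simply tests whether the NumPy image array contains any non-zero values:

import cv2

captureObject = cv2.VideoCapture(0)

testImage = None
while captureObject.isOpened():
  success, testImage = captureObject.read()

  # Stop once a frame with non-zero data arrives.
  if success and testImage is not None and testImage.any():
    break

captureObject.release()

# Numeric values of the first valid image.
print(testImage)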

 

Bounding Boxes Field is None in Ultralytics YOLO Model Results

When working with object detection using Ultralytics YOLO v8 in Python and attempting to add bounding boxes for classified objects to the camera image, it is possible to encounter a problem with the boxes field being undefined (equal to None).

The solution is to make sure you are using the yolov8n.pt model and not yolov8n-cls.pt: the latter does not seem to have this value set.

The -cls version is a classification model: it returns class probabilities for the whole image, not bounding boxes.

In short, the solution is to load the model using:

YOLO("yolov8n.pt")

instead of:

YOLO("yolov8n-cls.pt")

The following code shows a complete example of classifying objects using YOLO and adding bounding boxes.
Comments indicate where the problem with boxes being equal to None appears.

import cv2

from ultralytics import YOLO

captureObject = cv2.VideoCapture(0)
# Property 3 is the frame width, 4 is the frame height.
captureObject.set(3, 840)
captureObject.set(4, 780)

# Do not use yolov8n-cls.pt unless you do not need bounding boxes.
yoloModel = YOLO("yolov8n.pt")

# Get all class labels.
classLabels = list(yoloModel.names.values())

# Main loop.
while True:
  ret, img = captureObject.read()

  # Classify objects.
  results = yoloModel(img, stream=True)

  for r in results:
    boundingBoxes = r.boxes
    # The value of boxes is None if using yolov8n-cls.pt
    if boundingBoxes is not None:
      for box in boundingBoxes:
        # Get coordinates.
        x1, y1, x2, y2 = box.xyxy[0]
        # Convert to integer types.
        x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)

        # Draw bounding box rectangle inside camera image.
        cv2.rectangle(img, 
          (x1, y1), 
          (x2, y2), 
          (255, 0, 255), 
          3)

        # Add classification label on top of bounding box.
        classIndex = int(box.cls[0])
        label = classLabels[classIndex]
        cv2.putText(img, 
          label, 
          (x1, y1), 
          cv2.FONT_HERSHEY_SIMPLEX, 
          1, 
          (255, 0, 0), 
          2)

  # Show the camera image with overlay rectangles.
  cv2.imshow("webcam", img)

  # Exit with 'q' key.
  if cv2.waitKey(1) == ord("q"):
    break

captureObject.release()
cv2.destroyAllWindows()

 

Get All Values in a Python Dictionary

Sometimes we need a list of just the values (without the keys) from a Python dictionary.

Suppose we have a dictionary numbers defined as follows:

numbers = dict()

numbers["a"] = 1
numbers["b"] = 2
numbers["c"] = 3
numbers["d"] = 4

To get a list of just the values from this data structure, we can use the values method.
Note that values() returns a view object, so we convert it to a regular list.

result = list(numbers.values())

The result is:

[1, 2, 3, 4]

Another option is to use a list comprehension. This is especially useful if we want to do some further computations on all of the values immediately.

result = [numbers[key] for key in numbers.keys()]

The result is:

[1, 2, 3, 4]

The dictionary method keys() returns a view of all keys in the dictionary.
Then, numbers[key] is evaluated for each key to get the value at that key.
Finally, the list comprehension collects all values into a list.
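
For example, to double every value while extracting it:

doubled = [numbers[key] * 2 for key in numbers]

# [2, 4, 6, 8]
print(doubled)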

 

Simple RAG with a Locally Running LLM

This is a simple example of a RAG (Retrieval-Augmented Generation) application with a locally running LLM (Large Language Model).

For this example we will use Mistral running with Ollama on macOS.

See the Run the Mistral 7B LLM Locally section below for details on how to get it up and running.

First, ensure the model is running and responding to queries over HTTP:

$ curl -X POST http://localhost:11434/api/generate \
       -d '{"model":"mistral", "prompt":"Hello"}'

This should reply with a stream of tokens.

The idea of Retrieval Augmented Generation is to append information to the prompt which is not otherwise available to the model.

A simple example piece of data is the current system time. Normally, a language model does not have access to that information. If we ask:

>>> What time is it?
I don't have access to the current time, 
but you can use a world clock website or app 
to find out the current time in your location.

The following script uses RAG to append the current time to the prompt, so the LLM can answer with this new context.

simple-rag-request.py:

import json
import requests

from datetime import datetime

# Function to get extra data for RAG.
def getRAGData():
  currentTime = datetime.now().strftime("%I:%M %p")
  return "Current time is: " + currentTime + ". "

# Main program.
inputPrompt = input("Prompt: ")

API_URI = "http://localhost:11434/api/generate"

# API request body.
postBody = dict()
postBody["model"] = "mistral"

combinedPrompt = getRAGData() + inputPrompt
postBody["prompt"] = combinedPrompt
postBody["stream"] = False

result = requests.post(API_URI, json=postBody)

jsonResult = json.loads(result.text)
finalResponse = jsonResult["response"]

print(finalResponse)

Now we can run the script and see how the extra information informs the result:

$ python simple-rag-request.py
Prompt: what time is it?

The current time is 9:23 PM.

This idea is easily extended to querying proprietary data in our own databases, or any other data we wish to inject.
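
As a minimal sketch of that idea, the getRAGData function above could instead query a local SQLite database (the knowledge.db file and the notes table schema here are hypothetical, for illustration only):

import sqlite3

def getRAGData(query):
  # Hypothetical database and schema: notes(topic, content).
  connection = sqlite3.connect("knowledge.db")
  cursor = connection.cursor()
  cursor.execute(
    "SELECT content FROM notes WHERE topic LIKE ?",
    ("%" + query + "%",))
  rows = cursor.fetchall()
  connection.close()
  return " ".join(row[0] for row in rows) + ". "

The combined prompt would then include whatever the query retrieved, exactly as with the current time above.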

 

Run the Mistral 7B LLM Locally

We can easily run the Mistral 7B (seven-billion-parameter) Large Language Model locally using Ollama. In this example we assume we are running on macOS.

First, install Ollama.

Download the installer from:

https://github.com/jmorganca/ollama

Double-click the app to install the binary command.

Now, in a terminal, run:

$ ollama --version

The output should be similar to:

ollama version 0.1.13

If the command is successfully installed, we can download the Mistral 7B model with:

$ ollama run mistral

This will download and start the model.

Once loaded, we should see:

>>> Send a message (/? for help)

Now, try a test prompt:

>>> What is the capital of Estonia?

The capital of Estonia is Tallinn.

 

Empty Error When Running Llama with llama-cpp

When running various open source Llama Large Language Models (LLMs) from the command line with llama-cpp and the llm command-line tool, we may encounter an empty error such as:

$ llm -m modelName "test"
Error:

The empty error provides no clues, but it can happen when the installed version of llama-cpp does not match the model we are using.
Different models may use incompatible file formats internally, so we must ensure we have the correct version of llama-cpp for the given model.

The following versions work at the time of writing.

For Llama-2 Uncensored, install using:

$ pip install llama-cpp-python==0.1.78

For Llama-2, use:

$ pip install llama-cpp-python==0.2.11

We can check which version of llama-cpp-python is installed using:

$ pip show llama-cpp-python

To see all the models installed use:

$ llm models

To run a test again after switching versions:

$ llm -m modelName "test prompt"

 

Classify an Object in an Image in Python Using the YOLO Model

To perform object classification on an image file using Python, we can use the open source pre-trained YOLO model from Ultralytics.

First, install the library using:

$ pip install ultralytics

For example, assume we have an image of a tractor in a local file tractor.jpeg under images/.

Note that we can also run the model from the command line using:

$ yolo predict source='images/tractor.jpeg'

In Python, we need to extract the result from all of the model output, which requires a bit more code.

The model’s predict function will return a list of results with probability values, as well as a list of all labels.

The code below will extract the highest probability label and print it.

from ultralytics import YOLO

model = YOLO("yolov8n-cls.pt")

# Path to an image file assumed to exist.
results = model.predict("images/tractor.jpeg")

# Overall results is a list.
result = results[0]

probabilities = result.probs

# Top1 is the most likely result.
topLabelNumber = probabilities.top1

# Now find the label name for that label number.
allNames = result.names
for labelNumber, label in allNames.items():
  if labelNumber == topLabelNumber:
    resultLabel = label

print("Classification result:")
print(resultLabel)
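
Since result.names is a dictionary keyed by class index, the lookup loop above can also be written as a direct access:

resultLabel = result.names[topLabelNumber]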

 

Synthesize Speech in a Different Language using Python

To synthesize speech in Python in a language other than English using pyttsx3, we need to find which voice is available for the desired language.

First, we can print out the list of all available voices.
Each of the voice objects will include a list of languages that the voice supports (usually one).
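
For example, to print the ID and supported languages of every installed voice (a minimal sketch; on some platforms voice.languages may be empty):

import pyttsx3

synthesizer = pyttsx3.init()

for voice in synthesizer.getProperty("voices"):
  print(voice.id, voice.languages)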

In this example we will synthesize a string in Polish. For languages other than English, simply find a voice that supports the desired language in the full list of voices.

Once the desired voice is identified, the following script selects it and synthesizes a Polish string:

import pyttsx3

synthesizer = pyttsx3.init()

voices = synthesizer.getProperty("voices")

for voice in voices:
  if "zosia" in voice.id: # The Polish voice.
    print(voice.id) # Full ID string.
    print("Languages for voice:")
    print(voice.languages)

synthesizer.setProperty("language", "pl_PL")

synthesizer.setProperty("voice", 
  "com.apple.speech.synthesis.voice.zosia"
)

synthesizer.say("Cześć, jak się masz?")

synthesizer.runAndWait()