Calculating Various EMAs in Python Using the Technical Analysis Library

The following example shows how to calculate Exponential Moving Average (EMA) values for a stock, using various periods, in Python.

We are using the TA (Technical Analysis) library.

The example below calculates the 10-period and 25-period EMAs on a one-minute chart.

The window parameter can be adjusted to any desired value to calculate other useful periods such as the 50 or 200 EMA.

The example uses Yahoo Finance data via the yfinance library.

calculate-emas.py:

import yfinance as yf

from ta.trend import EMAIndicator

ticker = 'META'

EMA10_WINDOW = 10

EMA25_WINDOW = 25

# EMAs on 1 minute timeframe.
data = yf.download(tickers=ticker, 
   period='1d', 
   interval='1m'
)

closeValues = data['Close']

emaIndicator10 = EMAIndicator(close=closeValues, 
   window=EMA10_WINDOW
)

emaIndicator25 = EMAIndicator(close=closeValues, 
   window=EMA25_WINDOW
)

# These return Pandas series.
emaSeries10 = emaIndicator10.ema_indicator()

emaSeries25 = emaIndicator25.ema_indicator()

print('Last 5 values for EMA 10: ')
print(emaSeries10.tail(5))

print()

print('Last 5 values for EMA 25: ')
print(emaSeries25.tail(5))

Example run:

$ python calculate-emas.py

[*********************100%%**********************] 
1 of 1 completed

Last 5 values for EMA 10:
Datetime
2024-11-04 14:57:00-05:00 562.326530
2024-11-04 14:58:00-05:00 562.321704
2024-11-04 14:59:00-05:00 562.249574
2024-11-04 15:00:00-05:00 562.289524
2024-11-04 15:01:00-05:00 562.278698
Name: ema_10, dtype: float64

Last 5 values for EMA 25:
Datetime
2024-11-04 14:57:00-05:00 562.883210
2024-11-04 14:58:00-05:00 562.838347
2024-11-04 14:59:00-05:00 562.768088
2024-11-04 15:00:00-05:00 562.745104
2024-11-04 15:01:00-05:00 562.705480
Name: ema_25, dtype: float64
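Under the hood, the indicator follows the standard EMA recursion EMA_t = alpha * close_t + (1 - alpha) * EMA_(t-1), with alpha = 2 / (window + 1). Below is a minimal pure-Python sketch of that recursion (illustrative only; the TA library seeds and masks the first window values slightly differently):

```python
def ema(values, window):
    # Smoothing factor used by the standard EMA definition.
    alpha = 2 / (window + 1)
    result = [values[0]]  # Seed the recursion with the first value.
    for value in values[1:]:
        result.append(alpha * value + (1 - alpha) * result[-1])
    return result

closes = [10.0, 11.0, 12.0, 11.5, 12.5, 13.0]
print(ema(closes, window=3))
# [10.0, 10.5, 11.25, 11.375, 11.9375, 12.46875]
```

The larger the window, the smaller alpha becomes, so each new close moves the average less; this is why the 200 EMA reacts far more slowly than the 10 EMA.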

References

https://technical-analysis-library-in-python.readthedocs.io/en/latest/ta.html

Preventing Invalid Traffic Concerns with Google AdSense

Using Google AdSense, we may encounter issues with an ad serving limit placed on the account due to invalid traffic concerns.

There are many spam bots and automated scripts running on the Internet which may contribute to website traffic, but this traffic does not represent legitimate users.

For WordPress sites specifically, the first step in eliminating invalid traffic is to install plugins that block spam and click fraud. Two good options are given below.

For preventing click fraud:

ClickCease Click Fraud Protection

For preventing comment spam:

Antispam Bee

Further, another source of invalid traffic is automated hacking attempts: bots scanning websites try to run exploits, e.g. against login pages.

Therefore, we see many requests in the server logs with attempts to reach:

/wp-admin/login/
/wp-admin/login/login.php

and so on.

Hiding the login page URL helps stop these kinds of requests: if the usual default login URL returns a 404, a script will likely not attempt many further malicious requests.

We want to make it difficult to predict the actual valid URL.

First, generate a UUID (Universally Unique Identifier) for the login page using:

Online UUID Generator

The custom login URL can be, for example:

{some-site.com}/{UUID}_custom_login

Because the probability of guessing a specific UUID is very low, it should be very difficult to reach the valid custom login URL from any external script.
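Alternatively, the UUID can be generated locally with Python's standard library instead of an online tool:

```python
import uuid

# Version 4 UUIDs are random, so the resulting path is unguessable in practice.
login_path = f"/{uuid.uuid4()}_custom_login"
print(login_path)
```

Running this prints a fresh path such as /xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_custom_login each time.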

To change the login URL use the plugin:

Change wp-admin login

This adds new settings in WordPress. Look under:

Settings / Permalinks / Change wp-admin login

Enter the customized URL here and click Save Changes.

 

Challenge Failed Error with Certbot Renewal

When attempting to renew an SSL Certificate for a domain using Certbot on Ubuntu, we may encounter the following problem:

Renewing an existing certificate for <www.my-domain.com>
Performing the following challenges:
http-01 challenge for <www.my-domain.com>
Waiting for verification...
Challenge failed for domain <www.my-domain.com>

This usually means the domain cannot be reached from the outside: the http-01 challenge requires Let's Encrypt to connect to our server over HTTP on port 80 (and, if HTTP requests are redirected to HTTPS, over port 443 as well).
We need to make sure the domain can be reached on both ports.

Thus, ensure the firewall (UFW) allows these connections:

$ ufw allow 80
$ ufw allow 443

Depending on the user executing the command, this may need to be run as superuser:

$ sudo ufw allow 80
$ sudo ufw allow 443
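Before retrying, we can also verify reachability with a quick TCP check; a minimal sketch in Python (the hostname in the comment is a placeholder for the actual domain):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (placeholder hostname): is_port_open("www.my-domain.com", 443)
```

Note this only confirms the port accepts connections from wherever the check runs; the challenge itself must succeed from Let's Encrypt's validation servers.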

We can now try the Certbot renew command again.

 

Get RSI Values for a Stock using Yahoo Finance Data

We can use the Python Technical Analysis Library (ta) for the RSI (Relative Strength Index) calculation.

A simple and free API for finance data is Yahoo Finance (yfinance); the most convenient way to call it is using the yfinance Python library.

First, install the Technical Analysis library:

$ pip install ta

Install the Yahoo Finance library:

$ pip install yfinance

We use NVDA as an example, on a 5 minute timeframe.

The following script will retrieve the data and pass the results to the TA library to calculate the RSI.
The results are stored in a Pandas series; after retrieval we print out a sample of the latest values.

import yfinance as yf

from ta.momentum import RSIIndicator

ticker = 'NVDA'

# Make sure to use a window with enough data for the RSI calculation.
data = yf.download(tickers=ticker, period='5d', interval='5m')

closeValues = data['Close']

# Use the common 14 period setting.
rsi_14 = RSIIndicator(close=closeValues, window=14)

# This returns a Pandas series.
rsiSeries = rsi_14.rsi()

# Latest 10 values of the day for demonstration.
print(rsiSeries.tail(10))

Below is the output for a single run. The series contains timestamps and RSI values.

Datetime
2024-05-24 12:35:00-04:00    52.733213
2024-05-24 12:40:00-04:00    56.698742
2024-05-24 12:45:00-04:00    56.991273
2024-05-24 12:50:00-04:00    62.709294
2024-05-24 12:55:00-04:00    55.964462
2024-05-24 13:00:00-04:00    56.631537
2024-05-24 13:05:00-04:00    51.238671
2024-05-24 13:10:00-04:00    53.740376
2024-05-24 13:15:00-04:00    51.827860
2024-05-24 13:20:00-04:00    52.300520
Name: rsi, dtype: float64
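The values above come from Wilder's RSI formulation: a smoothed average of gains divided by a smoothed average of losses over the window. Below is a pure-Python sketch of the classic calculation (illustrative only; the library's seeding of the averages differs slightly, and the sample closes are made up):

```python
def rsi(closes, window=14):
    # Split price changes into gains and losses.
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))

    # Seed with simple averages over the first window changes.
    avg_gain = sum(gains[:window]) / window
    avg_loss = sum(losses[:window]) / window

    # Wilder's smoothing for the remaining changes.
    for gain, loss in zip(gains[window:], losses[window:]):
        avg_gain = (avg_gain * (window - 1) + gain) / window
        avg_loss = (avg_loss * (window - 1) + loss) / window

    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

closes = [44.0, 44.25, 44.5, 43.75, 44.65, 45.1, 45.4, 45.8,
          46.2, 45.9, 46.0, 45.75, 46.3, 46.5, 46.3, 46.0]
print(rsi(closes))
```

A series that only rises yields an RSI of 100, and one that only falls yields 0; mixed data lands in between, which is why traders read values above 70 as overbought and below 30 as oversold.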

 

Issue with Getting All Zeros from the Camera Using OpenCV in Python

Reading images from the camera (on macOS, for example) using Python with OpenCV may return vectors full of zero values even though the camera turns on correctly.

This may be due to the fact that we are not waiting for the data properly.

The solution is to make sure to read the data in a loop, i.e. to wait for the data to arrive instead of expecting it to be available instantly.
One single read may not be enough. The code below illustrates the situation.

The following will most likely return all zeros. Note that even though the isOpened() method returns True and the success value from captureObject.read() is also True, the camera data will not be ready.

import cv2

captureObject = cv2.VideoCapture(0)

if captureObject.isOpened():
  success, testImage = captureObject.read()

  # This may be True despite all-zeros images.
  print(success)

  # Numeric values of the image.
  print(testImage)

The following code shows how to properly wait for the camera data to be available in a loop. Note that the first few reads will output zeros as well.
Afterward, a stream of numeric values should be output until we exit the script.

import cv2

captureObject = cv2.VideoCapture(0)

while captureObject.isOpened():
  success, testImage = captureObject.read()

  # Numeric values of the image.
  print(testImage)

  # Use CTRL-C to exit.
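The wait-in-a-loop pattern can also be wrapped in a helper that keeps reading until a frame contains real data, with a retry limit. Below is a sketch with a simulated read() standing in for cv2.VideoCapture.read() (fake_read and its frame lists are made up for illustration; real cv2 frames are NumPy arrays, where frame.any() would replace any(frame)):

```python
def read_until_ready(read, max_attempts=100):
    # Keep calling read() until it returns a frame with non-zero data.
    for _ in range(max_attempts):
        success, frame = read()
        if success and frame is not None and any(frame):
            return frame
    raise RuntimeError("camera data never became ready")

# Simulated camera: the first 5 reads return all-zero frames.
state = {"calls": 0}
def fake_read():
    state["calls"] += 1
    if state["calls"] <= 5:
        return True, [0, 0, 0]
    return True, [12, 34, 56]

frame = read_until_ready(fake_read)
print(frame, "after", state["calls"], "reads")
# [12, 34, 56] after 6 reads
```

The retry limit avoids an infinite loop if the camera never delivers data, which the open-ended while loop above does not guard against.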

 

Bounding Boxes Field is None in Ultralytics YOLO Model Results

When working with object detection using Ultralytics YOLO v8 in Python and attempting to add bounding boxes for classified objects to the camera image, it is possible to encounter a problem with the boxes field being undefined (equal to None).

The solution is to make sure you are using the yolov8n.pt model and not yolov8n-cls.pt: the latter does not seem to have this value set.

The -cls version of the model only returns text descriptions and not the bounding boxes.

In short the solution is to load the model using:

YOLO("yolov8n.pt")

instead of:

YOLO("yolov8n-cls.pt")

The following code shows a complete example of classifying objects using YOLO and adding bounding boxes.
Comments indicate where the problem with boxes being equal to None appears.

import cv2

from ultralytics import YOLO

captureObject = cv2.VideoCapture(0)
captureObject.set(3, 840)
captureObject.set(4, 780)

# Do not use yolov8n-cls.pt unless you do not need bounding boxes.
yoloModel = YOLO("yolov8n.pt")

# Get all class labels.
classLabels = list(yoloModel.names.values())

# Main loop.
while True:
  ret, img = captureObject.read()
  cv2.imshow("webcam", img)

  # Classify objects.
  results = yoloModel(img, stream=True)

  for r in results:
    boundingBoxes = r.boxes
    # The value of boxes is None if using yolov8n-cls.pt
    if boundingBoxes is not None:
      for box in boundingBoxes:
        # Get coordinates.
        x1, y1, x2, y2 = box.xyxy[0]
        # Convert to integer types.
        x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)

        # Draw bounding box rectangle inside camera image.
        cv2.rectangle(img, 
          (x1, y1), 
          (x2, y2), 
          (255, 0, 255), 
          3)

        # Add classification label on top of bounding box.
        classIndex = int(box.cls[0])
        label = classLabels[classIndex]
        cv2.putText(img, 
          label, 
          [x1, y1], 
          cv2.FONT_HERSHEY_SIMPLEX, 
          1, 
          (255, 0, 0), 
          2)

  # Re-paint with overlay rectangles.
  cv2.imshow("webcam", img)

  # Exit with 'q' key.
  if cv2.waitKey(1) == ord("q"):
    break

captureObject.release()
cv2.destroyAllWindows()

 

Get All Values in a Python Dictionary

Sometimes we need to get a list of all of the values only (without keys) from a Python dictionary.

Suppose we have a dictionary numbers defined as follows:

numbers = dict()

numbers["a"] = 1
numbers["b"] = 2
numbers["c"] = 3
numbers["d"] = 4

To return a list of just the values at all of the keys from this data structure, we can use the values method.
Note that the output needs to be converted to a regular list.

result = list(numbers.values())

The result is:

[1, 2, 3, 4]

Another option is to use a list comprehension. This is especially useful if we want to do some further computations on all of the values immediately.

result = [numbers[key] for key in numbers.keys()]

The result is:

[1, 2, 3, 4]

The keys() method returns all keys in the dictionary.
Then, numbers[key] is called for each key to get the value at that key.
Finally, the list comprehension results in a list of all values.
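As a concrete example of doing further computation while extracting, the comprehension can transform each value in the same pass:

```python
numbers = {"a": 1, "b": 2, "c": 3, "d": 4}

# Square each value while extracting it from the dictionary.
squares = [value ** 2 for value in numbers.values()]
print(squares)  # [1, 4, 9, 16]
```

Since Python 3.7, dictionaries preserve insertion order, so the resulting list follows the order in which the keys were added.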

 

Simple RAG with a Locally Running LLM

This is a simple example of a RAG (Retrieval-Augmented Generation) application with a locally running LLM (Large Language Model).

For this example we will use Mistral running with Ollama on macOS.

See this post for more details on how to get it up and running.

First, ensure the model is running and responding to queries over HTTP:

$ curl -X POST http://localhost:11434/api/generate \
       -d '{"model":"mistral", "prompt":"Hello"}'

This should reply with a stream of tokens.

The idea of Retrieval Augmented Generation is to append information to the prompt which is not otherwise available to the model.

A simple example piece of data is the current system time. Normally, a language model does not have access to that information. If we ask:

>>> What time is it?
I don't have access to the current time, 
but you can use a world clock website or app 
to find out the current time in your location.

The following script uses RAG to append the current time to the prompt, so the LLM can answer with this new context.

simple-rag-request.py:

import json
import requests

from datetime import datetime

# Function to get extra data for RAG.
def getRAGData():
  currentTime = datetime.now().strftime("%I:%M %p")
  return "Current time is: " + currentTime + ". "

# Main program.
inputPrompt = input("Prompt: ")

API_URI = "http://localhost:11434/api/generate"

# API request body.
postBody = dict()
postBody["model"] = "mistral"

combinedPrompt = getRAGData() + inputPrompt
postBody["prompt"] = combinedPrompt
postBody["stream"] = False

result = requests.post(API_URI, json=postBody)

jsonResult = json.loads(result.text)
finalResponse = jsonResult["response"]

print(finalResponse)

Now we can run the script and see how the extra information informs the result:

$ python simple-rag-request.py
Prompt: what time is it?

The current time is 9:23 PM.

This idea is easily extended to querying proprietary data in our own databases, or any other data we wish to inject.
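As a sketch of that extension, the retrieval step can pull context from a small in-memory dictionary standing in for a real database (DOCUMENTS, retrieve_context, and build_prompt are hypothetical names, and the keyword matching is deliberately naive):

```python
# Tiny in-memory "knowledge base" standing in for a real database.
DOCUMENTS = {
    "pricing": "The Pro plan costs 29 USD per month.",
    "support": "Support is available on weekdays, 9 AM to 5 PM.",
}

def retrieve_context(prompt):
    # Naive retrieval: include every document whose key appears in the prompt.
    matches = [text for key, text in DOCUMENTS.items() if key in prompt.lower()]
    return " ".join(matches)

def build_prompt(prompt):
    # Prepend any retrieved context, exactly like getRAGData() above.
    context = retrieve_context(prompt)
    if context:
        return "Context: " + context + " Question: " + prompt
    return prompt

print(build_prompt("What is the pricing?"))
# Context: The Pro plan costs 29 USD per month. Question: What is the pricing?
```

A production system would replace the keyword match with a vector similarity search, but the prompt assembly step stays the same.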

 

Run the Mistral 7B LLM Locally

We can easily run the Mistral 7B (seven billion parameter) Large Language Model locally using Ollama. In this example we assume macOS.

First, install Ollama.

Download the installer from:

https://github.com/jmorganca/ollama

Double-click the app to install the binary command.

Now, in a terminal, run:

$ ollama --version

The output should be similar to:

ollama version 0.1.13

If the command is successfully installed, we can download the Mistral 7B model with:

$ ollama run mistral

This will download and start the model.

Once loaded, we should see:

>>> Send a message (/? for help)

Now, try a test prompt:

>>> What is the capital of Estonia?

The capital of Estonia is Tallinn.

 

Empty Error When Running Llama with llama-cpp

When running open source Llama Large Language Models (LLMs) from the command line with llama-cpp and the llm command-line tool, we may encounter an empty error such as:

$ llm -m modelName "test"
Error:

The empty error provides no clues, but this can happen if we have the incorrect version of llama-cpp installed for the model we are using.
Different models may use different incompatible file formats internally, so we must ensure we have the correct version of llama-cpp for the given model.

For example, for LLama-2 Uncensored, we can use llama-cpp-python version 0.1.78.

For Llama-2, we can use version 0.2.11.

The following installed versions work at the time of writing:

For Llama-2 Uncensored, install using:

$ pip install llama-cpp-python==0.1.78

For Llama-2, use:

$ pip install llama-cpp-python==0.2.11

We can check which version of llama-cpp-python is installed using pip:

$ pip show llama-cpp-python
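The installed version can also be queried from Python itself, without importing the package, using the standard library (a sketch):

```python
from importlib import metadata

# Look up the installed package version from its metadata.
try:
    print("llama-cpp-python", metadata.version("llama-cpp-python"))
except metadata.PackageNotFoundError:
    print("llama-cpp-python is not installed")
```

This is handy inside scripts that need to fail early when an incompatible version is present.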

To see all the models installed use:

$ llm models

To run a test again after switching versions:

$ llm -m modelName "test prompt"