Unsupervised Learning with Linear Algebra

In this blog post, I experiment with some machine learning approaches to compressing data. But first we need some data, so I wrote the code below to grab some from the internet. And specifically, we’re going to need some Data, plus Crusher, Troi, Picard, La Forge, Worf, and Riker.

import PIL.Image
import urllib.request
from matplotlib import pyplot as plt
import numpy as np

np.random.seed(1701) # for the NCC-1701

def read_image(url):
    return np.array(PIL.Image.open(urllib.request.urlopen(url)))

url = "https://static.wikia.nocookie.net/memoryalpha/images/2/23/Star_Trek_TNG_cast.jpg/revision/latest?cb=20200323081735&path-prefix=en"
img = read_image(url)

plt.imshow(img); plt.axis("off")
plt.savefig('TNG.png', pad_inches=0)
# Set the figure size for the plot
plt.figure(figsize=(15, 5))

# Function to convert an image to grayscale, as supplied in the blog docs
def to_greyscale(img):
    return 1 - np.dot(img[..., :3], [0.2989, 0.5870, 0.1140])

# Convert the image to grayscale
grey_img = to_greyscale(img)

# Plot the original image
plt.subplot(121)
plt.imshow(img)
plt.axis("off")
plt.title("original")

# Plot the grayscale image
plt.subplot(122)
plt.imshow(grey_img, cmap="Greys")
plt.axis("off")
plt.title("greyscale")

# Save the figure as an image file
plt.savefig('TNG.png', pad_inches=0)
Compression
As you can see, I converted the image to greyscale to help with the compression step that comes later.
For this exercise, it is much easier to work with a single brightness value per pixel, running from white to black, than with three separate color channels.
Plus it gives the image a cool retro vibe :)
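A quick way to see what we gain, assuming the conversion code above has already run: the color image stores three channel values per pixel, while the greyscale version stores a single brightness value.

print(img.shape)       # (height, width, 3): three color channels per pixel
print(grey_img.shape)  # (height, width): one brightness value per pixel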
Now let’s get into how I am going to compress this image. I’ll have Captain Picard explain:
Captain’s Log, Stardate 100944.83:
We have made a remarkable discovery regarding the compression of grayscale images. Singular Value Decomposition (SVD) has proven to be a valuable tool in this endeavor. Allow me to elucidate the process:
Step 1: Prepare the Image
Prepare the image for compression. This involves converting it to grayscale, focusing solely on the shades of gray that shape its essence. The stellar code provided above has already done this for us.
Step 2: Engage SVD
Engage the extraordinary power of Singular Value Decomposition. SVD dissects the image into three significant components: U, Sigma, and V. Each element holds crucial information about the image’s structure.
Step 3: Select the Singular Values
Decisions must be made regarding the number of singular values to retain. This parameter, denoted as K, governs the balance between compression and image fidelity. Choose wisely, as the optimal K will define the outcome.
Step 4: Precise Approximation
Utilize the chosen K to create an approximation of the image. By retaining only the most significant singular values and their corresponding components, we achieve a compressed representation of the image while minimizing information loss.
Step 5: Reconstruct the Image
Reconstruct the compressed image by combining the retained singular values, their components, and the V matrix. Witness the image emerge, akin to the restoration of a cherished artifact.
Step 6: Assess the Results
Evaluate the compression achieved, considering storage requirements and image quality. Balance is key, for a successful compression strikes a harmonious chord between compactness and faithful representation.
Through this application of Singular Value Decomposition, we unveil a new avenue for grayscale image compression. This technique empowers us to reduce data while preserving the image’s essential characteristics. A remarkable voyage indeed.
Captain Jean-Luc Picard, out.
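Thank you, Captain. In plain numpy terms, the heart of steps 2 through 5 is a rank-k truncation of the decomposition. Here is a minimal sketch, assuming A is a 2D greyscale array and k is the number of singular values we choose to keep (both are placeholders, not variables defined above):

U, sigma, V = np.linalg.svd(A)                  # step 2: decompose the image
A_k = U[:, :k] @ np.diag(sigma[:k]) @ V[:k, :]  # steps 3-5: keep k values and rebuild

The full implementation below wraps this same idea, plus some bookkeeping for choosing k and reporting the storage cost.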
import numpy as np
import warnings

def svd_reconstruct(img, num_sv=10, comp_factor=0, threshold=0):
    # Get the dimensions of the image
    m, n = img.shape
    data = np.array(img) # Convert the image to a numpy array

    # Calculate the number of singular values (num_sv) if a compression factor (comp_factor) is specified
    if comp_factor != 0:
        num_sv = int((comp_factor * n * m) / (n + m + 1))

    # Check if num_sv is larger than the dimensions of the image and adjust if necessary
    if num_sv > m or num_sv > n:
        warnings.warn("WARNING: num_sv does not fit dimensions of img")
        num_sv = min(n, m)

    # Perform singular value decomposition (SVD) on the image data
    U, sigma, V = np.linalg.svd(data)

    # Create a diagonal matrix D with the singular values
    D = np.zeros_like(data, dtype=float)
    D[:min(data.shape), :min(data.shape)] = np.diag(sigma)

    # Determine the components for the approximation based on whether a threshold is specified
    if threshold == 0:
        U_ = U[:, :num_sv]
        D_ = D[:num_sv, :num_sv]
        V_ = V[:num_sv, :]
    else:
        # Count the number of singular values larger than the threshold
        new_num_sv = np.count_nonzero(D > threshold)

        # Adjust new_num_sv if it exceeds the image dimensions
        if new_num_sv > m or new_num_sv > n:
            new_num_sv = min(n, m)

        # Use new_num_sv if it is smaller than the specified number of components (num_sv)
        if new_num_sv < num_sv:
            num_sv = new_num_sv

        U_ = U[:, :num_sv]
        D_ = D[:num_sv, :num_sv]
        V_ = V[:num_sv, :]

    # Reconstruct the image using the selected components
    data_ = U_ @ D_ @ V_

    # Calculate the storage size and compression ratio
    storage = (m * num_sv + num_sv + num_sv * n)
    fraction = storage / (n * m)

    return data_, round(fraction * 100, 2)
I implemented the code above using the examples provided in the blog post documentation and help hours. I call the K constant “num_sv”, since it represents the number of singular values retained and used in our compression. Otherwise the comments are pretty self-explanatory: svd_reconstruct takes a greyscale image, performs singular value decomposition on it, compresses the image by retaining either a specified number of singular values or only those above a threshold, and returns the reconstructed image along with the storage cost as a percentage of the original. A sketch of how it can be called in each mode is below, and after that let’s look at a quick comparison.
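Here is a hedged sketch of calling the function in each of its modes; the specific numbers (100 components, a 0.2 compression factor, a threshold of 50) are illustrative rather than tuned:

A = to_greyscale(img)

# Keep a fixed number of singular values
A_fixed, pct_fixed = svd_reconstruct(A, num_sv=100)

# Target a storage budget (~20% of the original size); a nonzero
# comp_factor overrides num_sv
A_budget, pct_budget = svd_reconstruct(A, comp_factor=0.2)

# Keep only singular values above a threshold, with num_sv acting
# as an upper bound on how many can be kept
A_thresh, pct_thresh = svd_reconstruct(A, num_sv=300, threshold=50)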
def compare_svd_reconstruct(img, num_sv=10, comp_factor=0, threshold=0):
    # Convert the input image to a numpy array
    A = np.array(img)

    # Reconstruct the image using singular value decomposition (SVD)
    A_, storage_percent = svd_reconstruct(A, num_sv, comp_factor=comp_factor, threshold=threshold)

    # Get the dimensions of the original image
    m, n = A.shape

    # Create a figure with two subplots for comparing the original and reconstructed images
    fig, axarr = plt.subplots(1, 2, figsize=(18, 9))

    # Display the original image in the first subplot
    axarr[0].imshow(A, cmap="Greys")
    axarr[0].axis("off")
    axarr[0].set(title="original greyscale image")

    # Display the reconstructed image in the second subplot
    axarr[1].imshow(A_, cmap="Greys")
    axarr[1].axis("off")
    img_title = "reconstructed image\n" + str(storage_percent) + "% storage"
    axarr[1].set(title=img_title)
compare_svd_reconstruct(to_greyscale(img), 50)
We can see here that 50 components is probably not enough to compress the image well. So let’s run some experiments to find a good number of singular values for our compression.
def svd_experiment(img):
    rows = 5
    columns = 3

    # Create a figure with subplots for displaying the results
    fig, axarr = plt.subplots(rows, columns, figsize=(20, 25))

    # Iterate over the rows and columns of the subplots
    for i in range(rows):
        for j in range(columns):
            # Determine the number of components (k) based on the current row and column
            k = (i * 4 + j + 1) * 15

            # Reconstruct the image using the specified number of components
            A_, storage_percent = svd_reconstruct(img, k)

            # Create a title for the subplot indicating the number of components and storage percentage
            img_title = str(k) + " components\n" + str(storage_percent) + "% storage"

            # Display the reconstructed image in the current subplot
            axarr[i][j].imshow(A_, cmap="Greys")
            axarr[i][j].axis("off")
            axarr[i][j].set(title=img_title)

    # Adjust the spacing between subplots
    fig.tight_layout()

svd_experiment(to_greyscale(img))
plt.savefig('experiment.png')
Now we’re getting somewhere. The first few panels clearly use too few singular values, but by the second row much of the image is already there. By the third or fourth row, the compression artifacts created by our method are far less distracting, and the image is hard to distinguish from the original without pixel peeping. This is most noticeable on the crew’s uniforms, where the compression struggles with smooth gradients. Around 225 components is where I would draw the line:
compare_svd_reconstruct(to_greyscale(img), 225)
Indeed, by using singular value decomposition (SVD) for image compression, we can significantly reduce the size of the image while preserving its essential features. This method works well for compressing this image of the TNG crew, though we may have to tweak it for other uses in the future. By my estimate, getting down to roughly 23% of the original storage demonstrates the effectiveness of SVD-based compression in reducing storage requirements without sacrificing essential image quality.
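For anyone who wants to sanity check that figure, the storage percentage reported throughout comes from the formula used inside svd_reconstruct: storing the rank-k factors takes m*k values for U, k singular values, and k*n values for V, so the fraction of the original m*n values is k*(m + n + 1) / (m*n). A small helper (illustrative, not part of the original pipeline) makes it easy to evaluate for any image shape and any k:

def storage_percent(shape, k):
    # k*(m + n + 1) values to store the rank-k factors, out of m*n original values
    m, n = shape
    return round(100 * k * (m + n + 1) / (m * n), 2)

# e.g. storage_percent(to_greyscale(img).shape, 225) should match the
# percentage shown in the figure title above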