Summary of Linear Algebra Applications in Computer Science
I. Google Eigenvector
The concept of ranking webpages was first introduced by S. Brin and L. Page in 1998. Let the web consist of $n$ webpages, most of which point to other webpages. Assign a number to each webpage. If page $j$ points to page $i$, then this link is an outlink for page $j$ and an inlink for page $i$ (a page is not supposed to link to itself). Represent this connectivity structure of the web by an $n \times n$ adjacency matrix $C = [c_{ij}]$, where $c_{ij} = 1$ if page $j$ has an outlink to page $i$ and $c_{ij} = 0$ if page $j$ does not have an outlink to page $i$. Thus, the $j$th column of $C$ describes the outlinks of page $j$ and the $i$th row describes the inlinks of page $i$.
Example: Consider a small web of four pages with adjacency matrix

$$C = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$

So, page 1 has one outlink, since $c_{3,1} = 1$, and three inlinks, since $c_{1,2} = c_{1,3} = c_{1,4} = 1$. [Figure: directed graph of the four-page web, not reproduced here.]
A page that is pointed to very often should be considered important. The ranking of any page is entirely defined by the matrix $C$. Here are some rules with increasing sophistication:

1. The ranking $r_i$ should grow with the number of page $i$'s inlinks.
2. The ranking $r_i$ should be weighted by the ranking of each of page $i$'s inlinks.
3. Let page $i$ have an inlink from page $j$. Then the more outlinks page $j$ has, the less it should contribute to $r_i$.

Let $n_j$ represent the total number of outlinks of page $j$. This is simply the sum of all elements of the $j$th column of $C$:

$$n_j = \sum_{i=1}^{n} c_{ij}.$$
The Google matrix is $Q = [q_{ij}] = [c_{ij}/n_j]$: each column of $C$ is divided by the number of outlinks of the corresponding page. In our example above, we have

$$Q = \begin{bmatrix} 0 & 1/2 & 1/3 & 1/2 \\ 0 & 0 & 1/3 & 0 \\ 1 & 1/2 & 0 & 1/2 \\ 0 & 0 & 1/3 & 0 \end{bmatrix}.$$
Let $r = [r_1, \ldots, r_n]^T$ be the vector of rankings. The rules above amount to $Qr = r$, so $r$ is an eigenvector of $Q$ corresponding to the eigenvalue 1, and one finds that 1 is the largest eigenvalue of $Q$; we call $r$ a stationary vector. We can find $r$ by power iteration: set all $r_i = 1$ initially and repeatedly multiply by $Q$; as the iterations converge, the solution is found. The vector $r$ contains the ranking of every page, called PageRank by Google. In our example above, we have $r = [0.67, 0.33, 1, 0.33]^T$. Because $r_3 = 1$ is the largest component, page 3 has the highest ranking.
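To make the power iteration concrete, here is a minimal sketch in Python using the four-page example above. Rescaling $r$ so that its largest entry is 1 at each step is a presentation choice added here, not something prescribed by the text:

```python
import numpy as np

# Adjacency matrix of the four-page example: C[i, j] = 1 if page j links to page i.
C = np.array([[0, 1, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Google matrix Q: divide each column j by n_j, the number of outlinks of page j.
Q = C / C.sum(axis=0)

# Power iteration: start from all r_i = 1 and repeatedly multiply by Q.
r = np.ones(4)
for _ in range(100):
    r = Q @ r
    r = r / r.max()   # rescale so the largest component is 1

print(np.round(r, 2))  # [0.67 0.33 1.   0.33] -> page 3 has the highest ranking
```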
II. Image Storage
Suppose we want to save the image above: a $6 \times 6$ image containing two colors, where the top half is pink and the bottom half is blue. [Image not reproduced here.] Writing $p$ for the pixel value of pink and $b$ for the pixel value of blue, the image is saved as the $6 \times 6$ matrix

$$A = \begin{bmatrix} p & p & p & p & p & p \\ p & p & p & p & p & p \\ p & p & p & p & p & p \\ b & b & b & b & b & b \\ b & b & b & b & b & b \\ b & b & b & b & b & b \end{bmatrix}.$$

Because every pink row is identical and every blue row is identical, the matrix $A$ above can be written as a sum of two rank-one products (the kind of expansion that the singular value decomposition produces):

$$A = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} p & p & p & p & p & p \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} b & b & b & b & b & b \end{bmatrix}.$$
Originally, we used up 36 numbers of memory to store our image ($6 \times 6 = 36$ entries), but by using the decomposition of matrix $A$ above we only need to store two column vectors and two row vectors, i.e. $4 \times 6 = 24$ numbers, so we spend less memory to store the same image.
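A small Python sketch of this idea, assuming for concreteness that pink is stored as the value 1 and blue as the value 2 (the original pixel values are not given in the text):

```python
import numpy as np

# Assumed pixel values: pink = 1.0, blue = 2.0.
p, b = 1.0, 2.0

# Full 6x6 image: 36 stored numbers.
A = np.vstack([np.full((3, 6), p), np.full((3, 6), b)])

# Compressed form: two indicator vectors and two row vectors,
# i.e. 4 * 6 = 24 stored numbers, exactly the expansion in the text.
u1 = np.array([1, 1, 1, 0, 0, 0], dtype=float)
u2 = np.array([0, 0, 0, 1, 1, 1], dtype=float)
row_p = np.full(6, p)
row_b = np.full(6, b)

B = np.outer(u1, row_p) + np.outer(u2, row_b)
print(np.allclose(A, B))  # True: the 24 numbers reproduce all 36 pixels
```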
III. Encoding and Decoding
Background: Encryption is the transformation of data into some unreadable form, so that not everyone who can view the encrypted data can understand it. Decryption is the transformation of the encrypted data back into an understandable form. Governments currently use sophisticated message encoding and decoding methods. It is very difficult to decode messages that were encoded using a very large matrix (the encoding matrix). The recipient of the message translates it using the inverse of that matrix (the decoding matrix).
Example: Suppose person X wants to send a message to person Y, but X does not want anyone except Y to be able to read it, so X encodes the message first. Let the message be “THIS IS A CAT” and the encoding matrix be

$$A = \begin{bmatrix} -2 & -1 \\ 1 & 2 \end{bmatrix}.$$
First, X assigns each letter of the alphabet a number: A = 0, B = 1, C = 2, …, Z = 25, and space = 26. Dropping the space before “CAT” so that the message length is even, the message becomes

19 7 8 18 26 8 18 26 0 2 0 19

Because the encoding matrix is a $2 \times 2$ matrix, X breaks the message into $2 \times 1$ vectors:

$$\begin{bmatrix} 19 \\ 7 \end{bmatrix}, \begin{bmatrix} 8 \\ 18 \end{bmatrix}, \begin{bmatrix} 26 \\ 8 \end{bmatrix}, \begin{bmatrix} 18 \\ 26 \end{bmatrix}, \begin{bmatrix} 0 \\ 2 \end{bmatrix}, \begin{bmatrix} 0 \\ 19 \end{bmatrix}.$$
X then writes these vectors as the columns of a matrix and multiplies it by the encoding matrix:

$$\begin{bmatrix} -2 & -1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} 19 & 8 & 26 & 18 & 0 & 0 \\ 7 & 18 & 8 & 26 & 2 & 19 \end{bmatrix} = \begin{bmatrix} -45 & -34 & -60 & -62 & -2 & -19 \\ 33 & 44 & 42 & 70 & 4 & 38 \end{bmatrix}.$$

So, X sends the message to Y in the form

−45, 33, −34, 44, −60, 42, −62, 70, −2, 4, −19, 38.
After Y receives the message above, Y (knowing the encoding matrix $A$) needs to find the inverse of $A$, which is

$$A^{-1} = \begin{bmatrix} -2/3 & -1/3 \\ 1/3 & 2/3 \end{bmatrix},$$

and then perform the matrix multiplication

$$\begin{bmatrix} -2/3 & -1/3 \\ 1/3 & 2/3 \end{bmatrix} \begin{bmatrix} -45 & -34 & -60 & -62 & -2 & -19 \\ 33 & 44 & 42 & 70 & 4 & 38 \end{bmatrix} = \begin{bmatrix} 19 & 8 & 26 & 18 & 0 & 0 \\ 7 & 18 & 8 & 26 & 2 & 19 \end{bmatrix}.$$
In the last step, Y transforms each element of the matrix back into a letter, and Y gets the message “THIS IS A CAT.”
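A short Python sketch of this scheme, using the exact matrix and the 12-number message above; rounding after applying $A^{-1}$ just removes floating-point noise:

```python
import numpy as np

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "   # A=0, ..., Z=25, space=26
A = np.array([[-2, -1],
              [ 1,  2]])

def encode(message, A):
    nums = [ALPHABET.index(ch) for ch in message]
    M = np.array(nums).reshape(-1, 2).T      # 2x1 blocks as matrix columns
    return A @ M

def decode(E, A):
    M = np.rint(np.linalg.inv(A) @ E).astype(int)
    return "".join(ALPHABET[n] for n in M.T.flatten())

E = encode("THIS IS ACAT", A)     # space before "CAT" dropped, as in the text
print(E.T.flatten())              # [-45 33 -34 44 -60 42 -62 70 -2 4 -19 38]
print(decode(E, A))               # THIS IS ACAT
```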
IV. Linear Code
Background: A linear code $C$ of length $n$ over a field $\mathbb{F}_q$ is a subspace of $\mathbb{F}_q^n$. Since a linear code is a vector space, all its elements can be described in terms of a basis, so there are algorithms that yield a basis for a given linear code. First, let $S$ be a nonempty subset of $\mathbb{F}_q^n$ and form a matrix $A$ whose rows are the words in $S$. Use elementary row operations to find a row echelon form of $A$; then the nonzero rows of the row echelon form are a basis for $C = \langle S \rangle$.
Example: Suppose that $q = 3$, $S = \{12101, 20110, 01122, 11010\}$, and $C = \langle S \rangle$. We can find a basis for $C$ by performing elementary row operations (with arithmetic mod 3) on the matrix

$$A = \begin{bmatrix} 1 & 2 & 1 & 0 & 1 \\ 2 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 2 & 2 \\ 1 & 1 & 0 & 1 & 0 \end{bmatrix}:$$

$$\begin{bmatrix} 1 & 2 & 1 & 0 & 1 \\ 2 & 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 2 & 2 \\ 1 & 1 & 0 & 1 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 1 & 0 & 1 \\ 0 & 2 & 2 & 1 & 1 \\ 0 & 1 & 1 & 2 & 2 \\ 0 & 2 & 2 & 1 & 2 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 1 & 0 & 1 \\ 0 & 1 & 1 & 2 & 2 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

Therefore $\{12101, 01122, 00001\}$ is a basis for $C$.
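The row reduction over $\mathbb{F}_3$ can be checked with a small Python routine; this is a generic row-echelon sketch for any prime modulus, not taken from the cited textbook [2]:

```python
import numpy as np

def row_echelon_mod_p(A, p):
    """Bring integer matrix A to row echelon form with arithmetic mod p (p prime)."""
    A = np.array(A) % p
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c] != 0), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]                     # swap pivot row up
        A[r] = (A[r] * pow(int(A[r, c]), p - 2, p)) % p   # scale pivot to 1 (Fermat inverse)
        for i in range(r + 1, rows):                      # eliminate below the pivot
            A[i] = (A[i] - A[i, c] * A[r]) % p
        r += 1
    return A

S = [[1, 2, 1, 0, 1],
     [2, 0, 1, 1, 0],
     [0, 1, 1, 2, 2],
     [1, 1, 0, 1, 0]]
print(row_echelon_mod_p(S, 3))   # nonzero rows form the basis {12101, 01122, 00001}
```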
V. Image Convolution
Background: Convolution slides a small matrix (the convolution matrix, or kernel) over an image; at each position it performs an element-wise multiplication of the two matrices followed by a sum of the results. We can think of the image as a large matrix and of the convolution matrix as a matrix used for image processing functions (sharpening, edge detection, blurring).
Example: Consider the image as a matrix. For example, the given image has the corresponding matrix

$$A = \begin{bmatrix} 10 & 10 \\ 10 & 10 \end{bmatrix}.$$

If we apply the convolution matrix $\begin{bmatrix} 1 & 0.5 \\ 0 & 1 \end{bmatrix}$ to matrix $A$ (multiplying element-wise), we get the matrix $\begin{bmatrix} 10 & 5 \\ 0 & 10 \end{bmatrix}$ and the corresponding image. [Images not reproduced here.] It can be seen that the red and green colors are the same as in the previous image, but the yellow and dark blue colors change to white and light blue, respectively, due to the convolution.
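As a sketch in Python: first the single element-wise step from the example, then a full sliding-window convolution for a larger image. The 4×4 test image is an invented placeholder, since the original images are not reproduced:

```python
import numpy as np

A = np.array([[10.0, 10.0],
              [10.0, 10.0]])
K = np.array([[1.0, 0.5],
              [0.0, 1.0]])

# The single step from the example: element-wise multiplication.
print(A * K)   # [[10.  5.] [ 0. 10.]]

# A full convolution slides the kernel over a larger image and sums each window.
def convolve2d(img, k):
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)  # multiply, then add
    return out

img = np.arange(16, dtype=float).reshape(4, 4)   # placeholder image
print(convolve2d(img, K))
```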
VI. Natural Language Processing (NLP)
Background: Natural Language Processing (NLP) is a branch of artificial intelligence that deals with the interaction between computers and human language. Since computers cannot understand words, we have to convert them into numbers. Word embeddings are a way to represent words as vectors of numbers while preserving their context. Word2vec is one of the most widely used word embeddings; it produces a better representation of words by learning their meaning from the context in which they appear. Word2vec uses two methods: continuous bag of words (predicting the current word from the context words in a specific window) and skip-gram (predicting the surrounding context words in a specific window based on the current word).
Suppose we have the sentence “My father is a fisherman.” Then our sentence consists of the vocabulary: “my”, “father”, “is”, “a”, “fisherman”. Each of the words is represented by a vector, for example by one-hot vectors:
my → [1 0 0 0 0]
father → [0 1 0 0 0]
is → [0 0 1 0 0]
a → [0 0 0 1 0]
fisherman → [0 0 0 0 1]

Then, the vector [1 0 0 0 0] means we are considering the word “my”.
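A minimal Python sketch of this one-hot encoding (real Word2vec then trains a small neural network on such vectors; that training step is beyond this summary):

```python
import numpy as np

sentence = "my father is a fisherman"
vocab = sentence.split()     # ["my", "father", "is", "a", "fisherman"]

# One-hot vector for each word: a 1 in the word's position, 0 elsewhere.
one_hot = {w: np.eye(len(vocab), dtype=int)[i] for i, w in enumerate(vocab)}

print(one_hot["my"])         # [1 0 0 0 0]
print(one_hot["fisherman"])  # [0 0 0 0 1]
```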
VII. Dimensionality Reduction
Background: High-resolution image data is represented by a very large matrix of numbers, and a very large matrix takes up more space and is harder for a computer to process than a small one. One of the most widely used dimensionality reduction techniques is the Singular Value Decomposition (SVD). SVD is a procedure for factoring a matrix into three matrices, so that calculations become simpler. If the original image data is represented by a matrix $A$ of order $m \times n$, then by SVD, $A$ is factored as $A = U \Sigma V^{*}$, where $U$ is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ diagonal matrix of singular values, and $V$ is an $n \times n$ unitary matrix.
Example: Let

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 1 & 1 \\ 2 & 4 & 5 \end{bmatrix}.$$

We find that $\operatorname{rank}(A) = 3$ and $A = U \Sigma V^{*}$, where

$$U = \begin{bmatrix} 0.475 & 0.406 & 0.781 \\ 0.209 & -0.914 & 0.348 \\ 0.855 & -0.002 & -0.519 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} 7.848 & 0 & 0 \\ 0 & 0.600 & 0 \\ 0 & 0 & 0.212 \end{bmatrix}, \quad V = \begin{bmatrix} 0.305 & -0.853 & 0.423 \\ 0.583 & -0.184 & -0.791 \\ 0.753 & 0.488 & 0.442 \end{bmatrix}.$$
The largest singular value is 7.848, so we select column 1 from $U$ and row 1 from $V^{*}$. An approximate matrix $B$ of the original matrix $A$ can be constructed as

$$B = 7.848 \begin{bmatrix} 0.475 \\ 0.209 \\ 0.855 \end{bmatrix} \begin{bmatrix} 0.305 & 0.583 & 0.753 \end{bmatrix} = \begin{bmatrix} 1.137 & 2.173 & 2.807 \\ 0.500 & 0.956 & 1.235 \\ 2.047 & 3.912 & 5.053 \end{bmatrix}.$$

We obtain $\operatorname{rank}(B) = 1$. Hence the resulting matrix approximates matrix $A$ with a lower rank, and only one column, one row, and one singular value need to be stored instead of all nine entries.
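The same computation in Python via numpy's SVD (the signs of the singular vectors may differ from the table above, but the product is the same):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 1.0],
              [2.0, 4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)
print(np.round(s, 3))            # [7.848 0.6   0.212]

# Rank-1 approximation: keep only the largest singular value.
B = s[0] * np.outer(U[:, 0], Vt[0])
print(np.round(B, 3))            # close to A
print(np.linalg.matrix_rank(B))  # 1
```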
VIII. Nonlinear Optimization
Background: The steepest-descent method is the simplest Newton-type method for nonlinear optimization. On the other hand, the method is inefficient at solving most problems: it often converges slowly, so the overall cost of solving an optimization problem becomes high. In this method, the search direction is obtained from $p_k = -\nabla f(x_k)$, and a line search then determines $x_{k+1} = x_k + \alpha_k p_k$.
Example (this example is from the book Linear and Nonlinear Optimization, 2nd edition, by Griva, Nash, and Sofer [4], page 404): Consider the problem

Minimize $f(x) = \tfrac{1}{2} x^{T} Q x - c^{T} x$

with

$$Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 25 \end{bmatrix} \quad \text{and} \quad c = [-1, -1, -1]^{T}.$$

The steepest-descent direction is

$$p_k = -\nabla f(x_k) = -(Q x_k - c).$$

An exact line search is used, so that $x_{k+1} = x_k + \alpha_k p_k$ with

$$\alpha_k = -\frac{\nabla f(x_k)^{T} p_k}{p_k^{T} Q p_k}.$$
If the initial guess is $x_0 = [0, 0, 0]^{T}$, then

$$f(x_0) = 0, \quad \nabla f(x_0) = [1, 1, 1]^{T}, \quad \lVert \nabla f(x_0) \rVert = 1.7321.$$

This implies that the step length is $\alpha_0 = 0.0968$, so we get

$$x_1 = [-0.0968, -0.0968, -0.0968]^{T}.$$

Then

$$f(x_1) = -0.1452, \quad \nabla f(x_1) = [0.9032, 0.5161, -1.4194]^{T}, \quad \lVert \nabla f(x_1) \rVert = 1.7598.$$

It takes 216 iterations before the norm of the gradient is less than $10^{-8}$.
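A direct Python transcription of this iteration (a sketch; the iteration count reported by the book can be checked by running it):

```python
import numpy as np

Q = np.diag([1.0, 5.0, 25.0])
c = np.array([-1.0, -1.0, -1.0])

def grad(x):
    return Q @ x - c                     # gradient of f(x) = 0.5 x^T Q x - c^T x

x = np.zeros(3)                          # initial guess x0 = [0, 0, 0]
k = 0
while np.linalg.norm(grad(x)) >= 1e-8:
    g = grad(x)
    p = -g                               # steepest-descent direction
    alpha = -(g @ p) / (p @ Q @ p)       # exact line search for a quadratic
    x = x + alpha * p
    k += 1

print(k, x)   # converges to the minimizer of f, which solves Qx = c
```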
References

1. Application to Cryptography. (n.d.). Retrieved October 23, 2020, from http://aix1.uottawa.ca/~jkhoury/cryptography.htm
2. Ling, S., & Xing, C. (2004). 4.4 Bases for linear codes. In Coding Theory: A First Course. Cambridge, UK: Cambridge University Press.
3. Metwalli, S. (2020, July 24). 5 Applications of Linear Algebra In Data Science. Retrieved October 23, 2020, from https://towardsdatascience.com/5-applications-of-linear-algebra-in-data-science-81dfc5eb9d4
4. Griva, I., Nash, S., & Sofer, A. (2009). Chapter 12: Methods for Unconstrained Optimization. In Linear and Nonlinear Optimization (2nd ed.). Philadelphia: Society for Industrial and Applied Mathematics.
5. Ali, Z. (2019, January 07). A simple Word2vec tutorial. Retrieved November 06, 2020, from https://medium.com/@zafaralibagh6/a-simple-word2vec-tutorial-61e64e38a6a1
6. Brownlee, J. (2019, October 18). How to Calculate the SVD from Scratch with Python. Retrieved November 06, 2020, from https://machinelearningmastery.com/singular-value-decomposition-for-machine-learning/
