Term-document matrix
A term-document matrix tells you how often a given word is found in a given document. Here is an example taken from the paper "Indexing by Latent Semantic Analysis" by Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer and Richard Harshman:

 | D1 | D2 | D3 | D4 | D5 | D6 | D7 | D8 | D9 |
---|---|---|---|---|---|---|---|---|---|
human | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
interface | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
computer | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
user | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
system | 0 | 1 | 1 | 2 | 0 | 0 | 0 | 0 | 0 |
response | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
time | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
EPS | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
survey | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
trees | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 |
graph | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 |
minors | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
In this example, there are 9 unnamed documents (they could be websites, Wikipedia articles, anything) and 12 selected terms. Each number is the number of times the given term occurs in the given document. For example, the term "human" occurs once in D1 but never in D2.
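To make the counting concrete, here is a minimal sketch of how such a matrix could be built from raw text. The two toy documents and the short term list are my own, made up for illustration:

```python
from collections import Counter

# Hypothetical toy documents, just to illustrate the counting.
documents = ["human machine interface for lab computer applications",
             "a survey of user opinion of computer system response time"]
terms = ["human", "interface", "computer", "user"]

# Count how often each term occurs in each document:
# one row per term, one column per document.
counts = [Counter(doc.split()) for doc in documents]
for term in terms:
    print term, [c[term] for c in counts]
```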
Term-document matrices are neat because the meaning of both the words and the documents can be characterized using a row or a column of the matrix. For example, the row belonging to the term "human" is [1, 0, 0, 1, 0, 0, 0, 0, 0]. This says something about its meaning because it occurs in D1 and D4; other words found in the same documents are likely to be somehow related or similar. On the other hand, the column belonging to the document D1 is [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]. This says something about its meaning as well because it contains only the words "human", "interface" and "computer"; other documents containing the same words are likely to be somehow related or similar.
This means that by comparing their respective rows or columns, that is, their vectors, you can determine how similar two words or two documents are. You can even create a search engine which finds documents based on keywords by treating the keyword list as a tiny document and comparing its vector with the vectors of the documents.
How can you compare vectors? You can use a vector similarity function such as cosine similarity, which, given two vectors, gives you a number between -1 and 1 depending on how similar they are.
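For illustration, here is a minimal cosine similarity helper written with NumPy (the function name and the example are my own, not from the paper):

```python
import numpy

def cosine_similarity(v, w):
    # Dot product of the vectors divided by the product of their lengths.
    v = numpy.array(v, dtype=float)
    w = numpy.array(w, dtype=float)
    return numpy.dot(v, w) / (numpy.linalg.norm(v) * numpy.linalg.norm(w))

# The columns for D1 and D2 from the matrix above; they share only "computer".
d1 = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
d2 = [0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
print cosine_similarity(d1, d2)  # prints about 0.24
```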
Unfortunately, comparing these vectors directly does not always work, because words in isolation do not discriminate meaning well enough. This is due to synonymy (different words which mean the same thing, such as "couch" and "sofa") and polysemy (the same word meaning different things, such as the "bank" in "bank account" and "river bank"). This means that in the above matrix, the words "human" and "user", which should be similar, have the following vectors:
[1, 0, 0, 1, 0, 0, 0, 0, 0]
[0, 1, 1, 0, 1, 0, 0, 0, 0]
These vectors do not hint at any similarity between the two words. However, although they appear in different documents, they co-occur with other words which are shared between those documents. For example, both "human" and "user" occur in documents which also contain the words "computer" and "interface". This indirect co-occurrence can be harnessed using LSA.
Latent Semantic Analysis
LSA uses linear algebra to bring out these latent (hidden) co-occurrences. In particular, it uses Singular Value Decomposition (SVD). We will not go into how this operation is performed but will instead assume that there is a library which does it for us; in particular we will be using Python's NumPy. SVD decomposes a matrix into 3 matrices which, when multiplied together, give the original matrix again. These matrices are called U, S and V. (Strictly speaking, numpy.linalg.svd returns the transpose of V, which is why multiplying U, S and V below in that order reconstructs the original matrix.) Here is the Python 2.7 code with NumPy which does this:

```python
import numpy

a = numpy.matrix("""
    1 0 0 1 0 0 0 0 0;
    1 0 1 0 0 0 0 0 0;
    1 1 0 0 0 0 0 0 0;
    0 1 1 0 1 0 0 0 0;
    0 1 1 2 0 0 0 0 0;
    0 1 0 0 1 0 0 0 0;
    0 1 0 0 1 0 0 0 0;
    0 0 1 1 0 0 0 0 0;
    0 1 0 0 0 0 0 0 1;
    0 0 0 0 0 1 1 1 0;
    0 0 0 0 0 0 1 1 1;
    0 0 0 0 0 0 0 1 1
""")

# full_matrices=False gives the "economy" SVD: U is 12x9, s has 9 values.
(U, s, V) = numpy.linalg.svd(a, full_matrices=False)
S = numpy.diag(s)

print U
print S
print V
```
U
-0.221350778443 | -0.113179617367 | 0.28895815444 | -0.414750740379 | -0.106275120934 | -0.340983323615 | -0.522657771461 | 0.0604501376207 | 0.406677508728 |
-0.197645401447 | -0.0720877787583 | 0.135039638689 | -0.552239583656 | 0.281768938949 | 0.495878011111 | 0.0704234411738 | 0.00994003720777 | 0.108930265566 |
-0.24047022609 | 0.0431519520879 | -0.164429078921 | -0.594961818064 | -0.106755285123 | -0.254955129767 | 0.302240236 | -0.0623280149762 | -0.492444363678 |
-0.403598863494 | 0.0570702584462 | -0.337803537502 | 0.0991137294933 | 0.331733717586 | 0.384831917013 | -0.00287217529118 | 0.000390504202226 | -0.0123293478854 |
-0.644481152473 | -0.167301205681 | 0.361148151433 | 0.333461601349 | -0.158954979122 | -0.206522587934 | 0.165828574516 | -0.0342720233321 | -0.270696289374 |
-0.265037470035 | 0.107159573274 | -0.425998496887 | 0.073812192192 | 0.0803193764074 | -0.169676388586 | -0.282915726531 | 0.0161465471957 | 0.053874688728 |
-0.265037470035 | 0.107159573274 | -0.425998496887 | 0.073812192192 | 0.0803193764074 | -0.169676388586 | -0.282915726531 | 0.0161465471957 | 0.053874688728 |
-0.300828163915 | -0.141270468264 | 0.330308434522 | 0.188091917879 | 0.114784622474 | 0.272155276471 | -0.0329941101555 | 0.0189980144259 | 0.165339169935 |
-0.205917861257 | 0.273647431063 | -0.177597017072 | -0.0323519366242 | -0.537150003306 | 0.0809439782143 | 0.466897525101 | 0.0362988295337 | 0.579426105711 |
-0.0127461830383 | 0.490161792453 | 0.231120154886 | 0.0248019985275 | 0.594169515589 | -0.392125064311 | 0.28831746071 | -0.254567945176 | 0.225424068667 |
-0.0361358490222 | 0.62278523454 | 0.22308636259 | 0.000700072121447 | -0.0682529381996 | 0.11490895384 | -0.159575476506 | 0.681125438043 | -0.231961226249 |
-0.0317563289336 | 0.450508919351 | 0.141115163889 | -0.00872947061057 | -0.30049511003 | 0.277343396711 | -0.339495286197 | -0.678417878879 | -0.182534975926 |
S
3.34088375213 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0 | 2.54170100004 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0 | 0.0 | 2.35394351766 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0 | 0.0 | 0.0 | 1.64453229237 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0 | 0.0 | 0.0 | 0.0 | 1.50483155049 | 0.0 | 0.0 | 0.0 | 0.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.30638195024 | 0.0 | 0.0 | 0.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.845903082647 | 0.0 | 0.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.560134422839 | 0.0 |
0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.36367684004 |
V
-0.197392802296 | -0.605990268919 | -0.462917508082 | -0.542114416925 | -0.279469108426 | -0.00381521297476 | -0.0146314675059 | -0.0241368353337 | -0.0819573680281 |
-0.0559135177721 | 0.165592877548 | -0.127312061589 | -0.231755228874 | 0.106774717006 | 0.192847936262 | 0.437874882598 | 0.6151218992 | 0.529937071643 |
0.110269729184 | -0.497326493627 | 0.207605952936 | 0.569921445337 | -0.505449906656 | 0.0981842398306 | 0.192955571817 | 0.252903978748 | 0.0792731465334 |
-0.949785023586 | -0.0286488989491 | 0.0416091951387 | 0.267714037747 | 0.15003543258 | 0.015081490733 | 0.0155071875251 | 0.0101990092357 | -0.0245549055501 |
0.0456785564271 | -0.206327277661 | 0.37833623285 | -0.20560471144 | 0.327194409395 | 0.394841213554 | 0.349485347525 | 0.149798472318 | -0.601992994659 |
-0.0765935584562 | -0.256475221191 | 0.72439964169 | -0.368860900846 | 0.034813049761 | -0.300161116158 | -0.212201424262 | 9.74341694268e-05 | 0.36221897331 |
-0.17731829729 | 0.432984244623 | 0.236889703269 | -0.264799522759 | -0.672303529825 | 0.340839827427 | 0.152194721647 | -0.249145920279 | -0.0380341888592 |
0.0143932590528 | -0.0493053257482 | -0.00882550204839 | 0.019466943894 | 0.0583495626425 | -0.454476523485 | 0.761527011149 | -0.449642756709 | 0.0696375496788 |
0.0636922895993 | -0.242782899677 | -0.0240768748334 | 0.0842069016903 | 0.262375876232 | 0.619847193574 | -0.0179751825326 | -0.51989049808 | 0.453506754839 |
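As a quick sanity check, multiplying the three matrices back together should reproduce the original matrix up to floating point error; numpy.allclose confirms this:

```python
# The product U * S * V equals the original matrix a (to within rounding).
print numpy.allclose(U * S * V, a)  # True
```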
Rank reduction
If U, S and V are multiplied together in that order, you get the original matrix again (approximately, due to rounding errors), as the check above confirms. Notice that the second matrix S is a diagonal matrix, that is, every element is zero except those on the diagonal. I will not go into the mathematics of this, but if you replace the smallest numbers in the diagonal of S with zeros and then multiply the three matrices together again, you get a matrix which is similar to the original but with some important differences: the more of the smallest diagonal values are replaced with zeros, the more similar the rows of related words become. This only works up to a point, of course; there has to be a balance between how many values are zeroed out and how much information is left in the matrix. Conveniently, the values in the diagonal are sorted in descending order. If we replace the smallest 7 values with zeros, we get the following:

```python
# Zero out the 7 smallest singular values, keeping only the largest 2.
S[2][2] = 0.0
S[3][3] = 0.0
S[4][4] = 0.0
S[5][5] = 0.0
S[6][6] = 0.0
S[7][7] = 0.0
S[8][8] = 0.0

A = U * S * V
print A
```
0.162057973899 | 0.400498283106 | 0.378954540321 | 0.467566261179 | 0.175953674216 | -0.05265494658 | -0.115142842816 | -0.159101981799 | -0.0918382678755 |
0.140585289239 | 0.369800771629 | 0.328996029685 | 0.400427224981 | 0.164972474341 | -0.0328154503866 | -0.0705685702014 | -0.0967682651277 | -0.0429807318561 |
0.152449476913 | 0.505004444164 | 0.357936583955 | 0.410106780092 | 0.236231733239 | 0.0242165157003 | 0.0597805100865 | 0.086857300988 | 0.123966320774 |
0.258049326846 | 0.841123434512 | 0.60571994881 | 0.697357170797 | 0.392317949475 | 0.0331180051639 | 0.0832449070523 | 0.121772385778 | 0.187379725005 |
0.448789754476 | 1.23436483383 | 1.0508614968 | 1.26579559131 | 0.556331394289 | -0.0737899841222 | -0.154693831118 | -0.209598161026 | -0.0488795415146 |
0.159554277475 | 0.581681899927 | 0.375218968496 | 0.416897679847 | 0.276540515564 | 0.0559037446194 | 0.132218498596 | 0.188911459228 | 0.216907605531 |
0.159554277475 | 0.581681899927 | 0.375218968496 | 0.416897679847 | 0.276540515564 | 0.0559037446194 | 0.132218498596 | 0.188911459228 | 0.216907605531 |
0.218462783401 | 0.549580580647 | 0.510960471265 | 0.628058018099 | 0.242536067696 | -0.0654109751045 | -0.142521455703 | -0.196611863571 | -0.107913297073 |
0.096906385715 | 0.532064379223 | 0.229913654058 | 0.211753629516 | 0.266525126236 | 0.13675618206 | 0.314620778341 | 0.444440582128 | 0.424969482179 |
-0.0612538812664 | 0.232108208041 | -0.138898404449 | -0.265645889929 | 0.144925494403 | 0.240421047963 | 0.546147168984 | 0.767374200391 | 0.663709334488 |
-0.0646770216648 | 0.335281153514 | -0.14564054552 | -0.301406070826 | 0.202756409842 | 0.305726121021 | 0.694893368967 | 0.976611213875 | 0.848749689135 |
-0.0430820430074 | 0.25390566477 | -0.0966669539764 | -0.207858206667 | 0.15191339999 | 0.221227031407 | 0.502944876315 | 0.706911627157 | 0.615504399537 |
Now if we take the first and fourth rows, which belong to "human" and "user", we get these:
0.162057973899 0.400498283106 0.378954540321 0.467566261179 0.175953674216 -0.05265494658 -0.115142842816 -0.159101981799 -0.0918382678755
0.258049326846 0.841123434512 0.60571994881 0.697357170797 0.392317949475 0.0331180051639 0.0832449070523 0.121772385778 0.187379725005
These rows are much more similar than they were before, and they are more similar to each other than to most other rows in the matrix.
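Reusing the cosine_similarity helper sketched earlier, we can put a number on this. The raw count vectors for "human" and "user" had a cosine similarity of exactly 0 (they share no documents), while the smoothed rows come out at roughly 0.9:

```python
human = numpy.asarray(A[0]).flatten()  # row for "human"
user = numpy.asarray(A[3]).flatten()   # row for "user"
print cosine_similarity(human, user)   # roughly 0.9, up from 0.0
```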
This is called a rank reduction because the effect of replacing the values with zeros is that you get a matrix with a smaller "rank", the rank being the number of linearly independent rows (or columns) in a matrix.
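NumPy can report the rank directly. The original count matrix has rank 9 (all 9 singular values are non-zero), while the reconstructed matrix has rank 2, matching the 2 singular values we kept:

```python
print numpy.linalg.matrix_rank(a)  # 9
print numpy.linalg.matrix_rank(A)  # 2
```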
Dimension reduction
The problem with a rank reduction is that the matrix suddenly becomes full of non-zero numbers where it used to be full of zeros, which means that using it requires a lot of slow processing. So a slight modification is used instead which makes the matrix smaller whilst still exposing its latent co-occurrences. After zeroing out the values, rather than multiplying all 3 matrices, only U and S or S and V are multiplied together, depending on whether you want to compare rows (terms) or columns (documents).

```python
# As before, zero out the 7 smallest singular values.
S[2][2] = 0.0
S[3][3] = 0.0
S[4][4] = 0.0
S[5][5] = 0.0
S[6][6] = 0.0
S[7][7] = 0.0
S[8][8] = 0.0

# This time multiply only U and S, since we want to compare rows (terms).
A = U * S
print A
```
-0.739507219222 | -0.287668746646 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.660310310378 | -0.183225579361 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.803383071215 | 0.109679359776 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-1.34837688543 | 0.145055532965 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-2.15313661085 | -0.425229641788 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.885459377345 | 0.272367594554 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.885459377345 | 0.272367594554 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-1.00503192501 | -0.359067290462 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.687947636947 | 0.695529949191 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.0425835158144 | 1.24584471806 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.120725670868 | 1.58293385344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
-0.106094203362 | 1.14505897084 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
The columns of zeros on the right of the matrix correspond to the values which were zeroed out in the diagonal of S. We can simply remove these columns.
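One way to do this is NumPy slicing, keeping only the first two columns (two because we kept 2 singular values):

```python
A = A[:, :2]  # keep only the first two columns
print A
```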
-0.739507219222 | -0.287668746646 |
-0.660310310378 | -0.183225579361 |
-0.803383071215 | 0.109679359776 |
-1.34837688543 | 0.145055532965 |
-2.15313661085 | -0.425229641788 |
-0.885459377345 | 0.272367594554 |
-0.885459377345 | 0.272367594554 |
-1.00503192501 | -0.359067290462 |
-0.687947636947 | 0.695529949191 |
-0.0425835158144 | 1.24584471806 |
-0.120725670868 | 1.58293385344 |
-0.106094203362 | 1.14505897084 |
Now we have only two columns to compare. The rows corresponding to "human" and "user" are:
-0.739507219222 -0.287668746646
-1.34837688543 0.145055532965
This is called a dimension reduction because the vectors being compared become smaller, which allows for faster computation. The same thing could have been done to compare documents by multiplying S and V instead and then removing the bottom rows rather than the right-hand columns.
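As a final check: because the rows of V (as returned by NumPy) are orthonormal, the cosine similarities between rows of U*S are exactly the same as those between rows of the full reduced matrix U*S*V, only computed on 2-dimensional vectors instead of 9-dimensional ones. The helper from earlier confirms it:

```python
human = numpy.asarray(A[0]).flatten()  # the "human" row
user = numpy.asarray(A[3]).flatten()   # the "user" row
print cosine_similarity(human, user)   # roughly 0.9, same as before
```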