
Vector Symbolic Architectures

Contents

1. Motivation
2. Origins
3. Approaches
4. Advantages
5. Noise
6. Example
7. Applications
8. References

Motivation

Vector Symbolic Architecture(s) (VSA) is a term coined by psychologist Ross W. Gayler [1] to refer to a class of connectionist network architectures developed since the mid-1990s. Such architectures were developed in an attempt to address the challenges to connectionism posed by Jerry Fodor, Zenon Pylyshyn, Ray Jackendoff, and other researchers. These researchers have argued that connectionist networks are, either in principle or in practice, incapable of modeling crucial features of cognition (language and thought), such as compositionality (the ability to put together complex thoughts or sentences from simpler thoughts or words) and systematicity (the fact that someone who understands the sentence "John kissed Mary" can also understand the sentence "Mary kissed John") [2]. VSA addresses these challenges by providing a binding operator that associates individuals (John, Mary) with roles (AGENT, PATIENT) and a superposition operator that allows multiple associations to be composed into a coherent whole. The name VSA comes from the fact that vectors are the sole means of representing all entities (roles, fillers, compositions).

Origins

VSA originated in Paul Smolensky's Tensor Product model [3], which implemented binding via the outer product. Although this approach worked in principle, it was subject to a combinatorial explosion problem: binding two vectors of N elements resulted in a square matrix of N² elements; binding that matrix with another vector of N elements resulted in a cube of N³ elements, and so on.
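
To make the growth concrete, here is a minimal numpy sketch (my own illustration, not code from the original article), with N kept deliberately tiny; at the N of roughly 10,000 typical of VSA, the second binding alone would already need on the order of 10^12 elements.

    import numpy as np

    # Tensor-product binding grows the representation with every binding step.
    N = 10
    rng = np.random.default_rng(0)

    role = rng.standard_normal(N)              # e.g. <AGENT>
    filler = rng.standard_normal(N)            # e.g. <John>

    pair = np.outer(role, filler)              # binding: N x N matrix (N**2 elements)
    other = rng.standard_normal(N)
    triple = np.multiply.outer(pair, other)    # binding again: N x N x N (N**3 elements)

    print(pair.shape, triple.shape)            # (10, 10) (10, 10, 10)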

Approaches

All VSA approaches implement composition as vector addition and solve the combinatorial explosion problem by reducing the outer product back to a single vector of N dimensions. Individual approaches differ in the kind of reduction they perform and in the contents of the vectors they use. Tony Plate's Holographic Reduced Representations (HRR) [4] use real-valued vectors and reduce the matrix through circular convolution. Pentti Kanerva's Binary Spatter Codes (BSC) [5] use binary (0/1) vectors and ignore the off-diagonal matrix elements, so that binding is performed elementwise (as XOR). Ross Gayler's Multiply / Add / Permute (MAP) architecture likewise ignores the off-diagonal elements but uses +1/-1 instead of 0/1, so that binding is the elementwise (Hadamard) product ⊗ and each vector is its own multiplicative inverse. This self-inverse property is convenient for recovering the elements of individual associations, but it necessitates some additional mechanism to 'protect' or quote associations embedded in others (Bill knows that [John kissed Mary]). Hence, MAP uses a permutation operator for quoting/protecting. Permutation is also useful for encoding order or precedence: rather than simply associating some other item A with an item B, we can represent the fact that A comes before B by binding the vector for A with the permutation of the vector for B; i.e., A⊗P(B).
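
The MAP operations are simple enough to sketch directly in numpy. This is my own illustration under the assumptions stated above (random ±1 vectors, binding as the elementwise product, superposition as elementwise addition, and a cyclic shift standing in for the permutation operator P); the helper names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000

    def vec():
        """A random +1/-1 vector (a MAP-style hypervector)."""
        return rng.choice([-1, 1], size=N)

    def bind(x, y):
        """Multiply: elementwise (Hadamard) product; each vector is its own inverse."""
        return x * y

    def bundle(*xs):
        """Add: elementwise sum implements superposition (composition)."""
        return np.sum(xs, axis=0)

    def permute(x, k=1):
        """Permute: a cyclic shift serves as the quoting/ordering operator P."""
        return np.roll(x, k)

    def sim(x, y):
        """Cosine similarity, for comparing (possibly noisy) vectors."""
        return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

    A, B = vec(), vec()
    before = bind(A, permute(B))               # "A comes before B": A (x) P(B)
    recovered = np.roll(bind(before, A), -1)   # unbind with A, then undo the shift
    print(sim(recovered, B), sim(before, B))   # ~1.0 vs. ~0.0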

Advantages

Having solved the problem of combinatorial explosion, VSA approaches are free to use vectors of arbitrarily large dimensionality (size); N = 10,000 is typical. Such large vectors provide a number of desirable features; for example, the space of such vectors contains on the order of 2^N nearly orthogonal vectors [6], and each such vector can be degraded by up to 30% and still be closer to its original form than to any of the other vectors in the space [7]. As with ordinary arithmetic, vector multiplication and addition are associative and commutative, and multiplication distributes over addition.
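
A quick empirical check of the near-orthogonality claim (my own sketch, assuming random ±1 vectors): the cosine similarity of two independently chosen vectors concentrates around zero, with a standard deviation of roughly 1/√N.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000

    # For +/-1 vectors the norm is sqrt(N), so dot(x, y) / N is the cosine similarity.
    sims = [np.dot(rng.choice([-1, 1], size=N),
                   rng.choice([-1, 1], size=N)) / N
            for _ in range(1000)]
    print(np.mean(sims), np.std(sims))    # roughly 0.0 and 0.01 (= 1/sqrt(10000))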

The use of large vectors also makes VSA a variety of distributed representation, in which the identity of a symbol is encoded across all elements of the representation (vector), and each element of the representation takes part in representing the symbol. Chris Eliasmith and his colleagues have argued that, compared to 'localist' representations (one element per symbol), such representations are more capable of scaling up to nontrivial problems [8]. Stewart, Bekolay, and Eliasmith have also provided an implementation of HRR in spiking neurons, which they embed in an architecture that uses them to do processing [9], adding to the plausibility of VSA as a model of cognition in humans and other animals.

Unlike many traditional neural networks, VSAs do not rely on backpropagation or other compute-intensive learning algorithms and, as shown in the example below, can often induce a pattern from a single example. Further, elementwise multiplication and addition are embarrassingly parallel operations that can be performed efficiently (in principle, in constant time).

Noise

Because they use finite dimensionality and finite precision, all VSA approaches can be considered a variety of lossy (noisy) compression. To recover the original vectors in the presence of noise, VSA employs a cleanup memory that stores the vector representations of the items of interest (Mary, AGENT, etc.). Cleanup memory can be implemented as a simple list of items, as a Hopfield network, or in a spiking-neuron model [15].
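
A list-based cleanup memory is only a few lines. The sketch below (my own, with hypothetical names) stores normalized item vectors and returns the stored item most similar to a noisy query, recovering an item even after 30% of its elements have been flipped.

    import numpy as np

    class CleanupMemory:
        """A minimal list-based cleanup memory: store named item vectors and
        return the name of the stored item most similar to a noisy query."""

        def __init__(self):
            self.names, self.vectors = [], []

        def add(self, name, vector):
            self.names.append(name)
            self.vectors.append(vector / np.linalg.norm(vector))

        def cleanup(self, noisy):
            noisy = noisy / np.linalg.norm(noisy)
            best = int(np.argmax([v @ noisy for v in self.vectors]))
            return self.names[best]

    rng = np.random.default_rng(0)
    N = 10_000
    mem = CleanupMemory()
    items = {name: rng.choice([-1, 1], size=N) for name in ("MARY", "AGENT", "PES")}
    for name, v in items.items():
        mem.add(name, v)

    # Flip 30% of one item's elements; the degraded vector still cleans up correctly.
    noisy = items["PES"].copy()
    flipped = rng.choice(N, size=int(0.3 * N), replace=False)
    noisy[flipped] *= -1
    print(mem.cleanup(noisy))    # PES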

Example

This example, due to Pentti Kanerva [10], shows how the MAP variety of VSA can be used to perform analogical reasoning.

Consider the knowledge we have about a particular country: for example, the name of our country is United States of America, and our capital is Washington DC; we might say that Washington DC fills the CAPITAL role for the US. Our monetary unit is the dollar. The name of our neighbor to the south is Mexico; its capital is Mexico City, and its currency is the peso. Using the notation <X> to mean the vector representation of X, we can represent this knowledge as follows:

V1 = [ <NAME>⊗<USA> + <CAP>⊗<WDC> + <MON>⊗<DOL> ]

V2 = [ <NAME>⊗<MEX> + <CAP>⊗<MXC> + <MON>⊗<PES> ]

To compare the United States to Mexico, we can take the vector product V1 ⊗ V2 of these representations, which is itself just another vector:

[ <NAME>⊗<USA> + <CAP>⊗<WDC> + <MON>⊗<DOL> ] ⊗
[ <NAME>⊗<MEX> + <CAP>⊗<MXC> + <MON>⊗<PES> ]

Multiplying out, we see that this vector is equal to the vector

<NAME>⊗<USA>⊗<NAME>⊗<MEX> +
<NAME>⊗<USA>⊗<CAP>⊗<MXC> +
<NAME>⊗<USA>⊗<MON>⊗<PES> +
<CAP>⊗<WDC>⊗<NAME>⊗<MEX> +
<CAP>⊗<WDC>⊗<CAP>⊗<MXC> +
<CAP>⊗<WDC>⊗<MON>⊗<PES> +
<MON>⊗<DOL>⊗<NAME>⊗<MEX> +
<MON>⊗<DOL>⊗<CAP>⊗<MXC> +
<MON>⊗<DOL>⊗<MON>⊗<PES>

Thanks to associativity and commutativity, we can rewrite this expression as

<NAME>⊗<NAME>⊗<USA>⊗<MEX> +
<CAP>⊗<CAP>⊗<WDC>⊗<MXC> +
<MON>⊗<MON>⊗<DOL>⊗<PES> +
<NAME>⊗<USA>⊗<CAP>⊗<MXC> +
<NAME>⊗<USA>⊗<MON>⊗<PES> +
<CAP>⊗<WDC>⊗<NAME>⊗<MEX> +
<CAP>⊗<WDC>⊗<MON>⊗<PES> +
<MON>⊗<DOL>⊗<NAME>⊗<MEX> +
<MON>⊗<DOL>⊗<CAP>⊗<MXC>

Because of the self-canceling property of MAP, this expression can be rewritten as

<USA>⊗<MEX> +
<WDC>⊗<MXC> +
<DOL>⊗<PES> +
<NAME>⊗<USA>⊗<CAP>⊗<MXC> +
<NAME>⊗<USA>⊗<MON>⊗<PES> +
<CAP>⊗<WDC>⊗<NAME>⊗<MEX> +
<CAP>⊗<WDC>⊗<MON>⊗<PES> +
<MON>⊗<DOL>⊗<NAME>⊗<MEX> +
<MON>⊗<DOL>⊗<CAP>⊗<MXC>

To ask "What is the dollar of Mexico?" we multiply this vector by <DOL>, our VSA representation of the dollar:

<DOL> ⊗
[ <USA>⊗<MEX> +
<WDC>⊗<MXC> +
<DOL>⊗<PES> +
<NAME>⊗<USA>⊗<CAP>⊗<MXC> +
<NAME>⊗<USA>⊗<MON>⊗<PES> +
<CAP>⊗<WDC>⊗<NAME>⊗<MEX> +
<CAP>⊗<WDC>⊗<MON>⊗<PES> +
<MON>⊗<DOL>⊗<NAME>⊗<MEX> +
<MON>⊗<DOL>⊗<CAP>⊗<MXC> ]

=

[ <DOL>⊗<USA>⊗<MEX> +
<DOL>⊗<WDC>⊗<MXC> +
<DOL>⊗<DOL>⊗<PES> +
<DOL>⊗<NAME>⊗<USA>⊗<CAP>⊗<MXC> +
<DOL>⊗<NAME>⊗<USA>⊗<MON>⊗<PES> +
<DOL>⊗<CAP>⊗<WDC>⊗<NAME>⊗<MEX> +
<DOL>⊗<CAP>⊗<WDC>⊗<MON>⊗<PES> +
<DOL>⊗<MON>⊗<DOL>⊗<NAME>⊗<MEX> +
<DOL>⊗<MON>⊗<DOL>⊗<CAP>⊗<MXC> ]

=

[ <PES> +
<DOL>⊗<USA>⊗<MEX> +
<DOL>⊗<WDC>⊗<MXC> +
<DOL>⊗<NAME>⊗<USA>⊗<CAP>⊗<MXC> +
<DOL>⊗<NAME>⊗<USA>⊗<MON>⊗<PES> +
<DOL>⊗<CAP>⊗<WDC>⊗<NAME>⊗<MEX> +
<DOL>⊗<CAP>⊗<WDC>⊗<MON>⊗<PES> +
<DOL>⊗<MON>⊗<DOL>⊗<NAME>⊗<MEX> +
<DOL>⊗<MON>⊗<DOL>⊗<CAP>⊗<MXC> ]

This vector will be highly correlated with the vector <PES> representing the peso, and will have a low correlation with any of the other vectors (<DOL>, <USA>, <NAME>, etc.). Hence we can rewrite the expression as

[ <PES> + noise ]

where noise is a vector uncorrelated with any other vector in our model. If we like, we can get a perfect reproduction of the representation <PES> by passing this noisy vector through a cleanup memory. More important, though, is that we have correctly answered the question "What is the dollar of Mexico?" with the answer Peso, without the need for decomposing or extracting any of the components of the encoded representation, as we would with a traditional computational model based on data structures and pointers.
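
The entire example can be checked in a few lines of numpy. The sketch below (my own, following the MAP encoding described above) shows that the answer vector's similarity to <PES> is roughly 0.33, while its similarity to every other stored vector is near zero.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 10_000
    names = ("NAME", "CAP", "MON", "USA", "WDC", "DOL", "MEX", "MXC", "PES")
    V = {n: rng.choice([-1, 1], size=N) for n in names}

    # V1 and V2 as defined in the text: binding is the elementwise product,
    # superposition is elementwise addition.
    V1 = V["NAME"] * V["USA"] + V["CAP"] * V["WDC"] + V["MON"] * V["DOL"]
    V2 = V["NAME"] * V["MEX"] + V["CAP"] * V["MXC"] + V["MON"] * V["PES"]

    mapping = V1 * V2            # the country-to-country mapping vector
    answer = V["DOL"] * mapping  # "What is the dollar of Mexico?"

    def sim(x, y):
        return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

    # <PES> stands out (~0.33); all other stored vectors sit near zero.
    for n in names:
        print(n, round(sim(answer, V[n]), 2))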

Applications

Because of the generality of the role/filler mechanism, VSA can be applied to a variety of tasks. Recent applications have included solutions to intelligence tests [11] and to graph isomorphism problems [12], as well as behavior-based robotics [13] and the representation of word meaning and order in natural language [14].

References

1. Ross W. Gayler (2003). Vector Symbolic Architectures Answer Jackendoff's Challenges for Cognitive Neuroscience. In Slezak, P. (ed.), ICCS/ASCS International Conference on Cognitive Science. CogPrints, Sydney, Australia, University of New South Wales, 133-138.
2. Jerry A. Fodor and Zenon W. Pylyshyn (1988). Connectionism and Cognitive Architecture: A Critical Analysis. Cognition 28:1-2, 3-71.
3. Paul Smolensky (1990). Tensor Product Variable Binding and the Representation of Symbolic Structures in Connectionist Systems. Artificial Intelligence 46, 159-216.
4. Tony Plate (2005). Holographic Reduced Representation: Distributed Representation for Cognitive Structures. Stanford: CSLI Publications.
5. Pentti Kanerva (1994). The Spatter Code for Encoding Concepts at Many Levels. In M. Marinaro and P. G. Morasso (eds.), ICANN '94, Proceedings of the International Conference on Artificial Neural Networks (Sorrento, Italy), vol. 1, 226-229. London: Springer-Verlag.
6. Robert Hecht-Nielsen (1994). Context Vectors: General Purpose Approximate Meaning Representations Self-Organized from Raw Data. In J. M. Zurada, R. J. Marks II, and C. J. Robinson (eds.), Computational Intelligence: Imitating Life. IEEE Press.
7. Pentti Kanerva (2009). Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors. Cognitive Computation 1(2), 139-159.
8. Terry Stewart and Chris Eliasmith (2011). Compositionality and Biologically Plausible Models. In W. Hinzen, E. Machery, and M. Werning (eds.), Oxford Handbook of Compositionality. Oxford University Press.
9. Terry Stewart, Trevor Bekolay, and Chris Eliasmith (2011). Neural Representations of Compositional Structures: Representing and Manipulating Vector Spaces with Spiking Neurons. Connection Science 22(3), 145-153.
10. Pentti Kanerva (2010). What We Mean When We Say "What's the Dollar of Mexico?": Prototypes and Mapping in Concept Space. In P. D. Bruza, W. Lawless, K. van Rijsbergen, D. A. Sofge, and D. Widdows (eds.), Quantum Informatics for Cognitive, Social, and Semantic Processes: Papers from the AAAI Symposium.
11. Daniel Rasmussen and Chris Eliasmith (2011). A Neural Model of Rule Generation in Inductive Reasoning. Topics in Cognitive Science 3, 140-153.
12. Ross W. Gayler and Simon D. Levy (2009). A Distributed Basis for Analogical Mapping. Proceedings of the Second International Analogy Conference. NBU Press.
13. Simon D. Levy, Suraj Bajracharya, and Ross W. Gayler (2013). Learning Behavior Hierarchies via High-Dimensional Sensor Projection. In Learning Rich Representations from Low-Level Sensors: Papers from the 2013 AAAI Workshop.
14. Michael N. Jones and Douglas J. Mewhort (2007). Representing Word Meaning and Order Information in a Composite Holographic Lexicon. Psychological Review 114:1, 1-37.
15. Terrence C. Stewart, Yichuan Tang, and Chris Eliasmith (2010). A Biologically Realistic Cleanup Memory: Autoassociation in Spiking Neurons. Cognitive Systems Research 12, 84-92.
