Commit ec334e2: deploy: 8fa35b5

facebook-github-bot committed Dec 26, 2024
1 parent 9846117 commit ec334e2

Showing 5 changed files with 39 additions and 15 deletions.
32 changes: 20 additions & 12 deletions _modules/xformers/ops/fmha/attn_bias.html
@@ -1812,18 +1812,8 @@ Source code for xformers.ops.fmha.attn_bias
     _subtensor: torch.Tensor

     @staticmethod
-    def __new__(cls, *, _subtensor=None):
-        if _subtensor is None:
-            _subtensor = torch.empty((0,), device=_get_default_bias_device())
-        tensor = torch.Tensor._make_wrapper_subclass(  # type: ignore[attr-defined]
-            cls,
-            [],
-            device=_subtensor.device,
-            dtype=_subtensor.dtype,
-            requires_grad=False,
-        )
-        tensor._subtensor = _subtensor
-        return tensor
+    def __new__(cls, *, _subtensor=None, device=None, **kwargs):
+        raise NotImplementedError()

     def __init__(self, *args, **kwargs) -> None:
         super().__init__()
@@ -1890,6 +1880,24 @@ Source code for xformers.ops.fmha.attn_bias

     HOLDS_DENSE_TENSOR = False

+    @staticmethod
+    def __new__(cls, *, _subtensor=None, device="cpu", **kwargs):
+        """
+        Note: create on CPU by default to avoid initializing CUDA context
+        by mistake.
+        """
+        if _subtensor is None:
+            _subtensor = torch.empty((0,), device=device)
+        tensor = torch.Tensor._make_wrapper_subclass(  # type: ignore[attr-defined]
+            cls,
+            [],
+            device=_subtensor.device,
+            dtype=_subtensor.dtype,
+            requires_grad=False,
+        )
+        tensor._subtensor = _subtensor
+        return tensor
+
     def materialize(
         self,
         shape: Tuple[int, ...],
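The construction pattern in this diff can be sketched without torch: the base class now refuses direct instantiation, while the concrete subclass supplies its own `__new__` with an explicit CPU default. A minimal pure-Python sketch follows; the class names match the diff, but the `torch.Tensor._make_wrapper_subclass` machinery is replaced with plain `object.__new__` and a tuple stand-in for the backing tensor, purely for illustration.

```python
# Pure-Python sketch of the construction pattern in this diff.
# The real code wraps torch.Tensor._make_wrapper_subclass; here we use
# object.__new__ and a tuple so the example runs without torch.

class AttentionBiasSubTensor:
    """Base class: after this change it cannot be constructed directly."""

    def __new__(cls, *, _subtensor=None, device=None, **kwargs):
        raise NotImplementedError()


class LowerTriangularMask(AttentionBiasSubTensor):
    def __new__(cls, *, _subtensor=None, device="cpu", **kwargs):
        # Create on CPU by default to avoid initializing a CUDA
        # context by mistake (mirrors the docstring in the diff).
        self = object.__new__(cls)
        # Stand-in for torch.empty((0,), device=device).
        self._subtensor = _subtensor if _subtensor is not None else ("empty", device)
        return self


mask = LowerTriangularMask()
print(mask._subtensor)  # ('empty', 'cpu')

try:
    AttentionBiasSubTensor()
except NotImplementedError:
    print("base class is abstract")
```

Constructing the subclass never touches a GPU unless the caller passes `device` explicitly, which is the point of the change.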
9 changes: 8 additions & 1 deletion components/ops.html
@@ -952,13 +952,20 @@ xFormers optimized operators

-class xformers.ops.fmha.attn_bias.LowerTriangularMask(*, _subtensor=None)
+class xformers.ops.fmha.attn_bias.LowerTriangularMask(*, _subtensor=None, device='cpu', **kwargs)

     Bases: AttentionBiasSubTensor

     A lower-triangular (aka causal) mask.

     A query Q cannot attend to a key which is farther from the
     initial key than Q is from the initial query.

     See also LowerTriangularFromBottomRightMask if the number of
     queries is not equal to the number of keys/values.

+    static __new__(cls, *, _subtensor=None, device='cpu', **kwargs)
+
+        Note: create on CPU by default to avoid initializing CUDA context
+        by mistake.
+
     add_bias(bias: Tensor) -> LowerTriangularMaskWithTensorBias
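The design choice behind the new `device='cpu'` default is visible in what the diff removes: the old `__new__` called `_get_default_bias_device()`, so merely constructing a mask could probe for an accelerator and initialize a CUDA context as a side effect. The sketch below emulates that hazard with a flag instead of a real CUDA probe (the helper body here is hypothetical; only its name comes from the diff).

```python
# Sketch of the design choice. The real _get_default_bias_device probes
# torch for an accelerator; here a module-level flag represents
# "CUDA context was initialized".

cuda_context_initialized = False

def _get_default_bias_device():
    # Emulates the removed helper: choosing a device eagerly has the
    # side effect of spinning up the (emulated) CUDA context.
    global cuda_context_initialized
    cuda_context_initialized = True
    return "cuda"

def mask_device_old():
    # Old behavior: device picked for the caller, touching CUDA even
    # when a CPU mask would have been fine.
    return _get_default_bias_device()

def mask_device_new(device="cpu"):
    # New behavior: CPU unless the caller opts in explicitly.
    return device

assert mask_device_new() == "cpu"
assert not cuda_context_initialized   # no accidental CUDA init
assert mask_device_old() == "cuda"
assert cuda_context_initialized       # only the eager path pays the cost
```

Making GPU placement opt-in keeps lightweight uses (serialization, CPU-only inference, tests) from paying CUDA startup cost.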
11 changes: 10 additions & 1 deletion genindex.html
@@ -222,7 +222,8 @@ Index

 Jump box:
-A | B | D | F | ... | X
+_ | A | B | D | F | ... | X

@@ -237,6 +238,14 @@ Index

+_
+
+    __new__() (xformers.ops.fmha.attn_bias.LowerTriangularMask static method)
+
 A
Binary file objects.inv modified (contents not shown).
