switch template #423

Open · wants to merge 14 commits into base: dev
Binary file added .DS_Store
Binary file not shown.
20 changes: 0 additions & 20 deletions .eslintrc

This file was deleted.

191 changes: 0 additions & 191 deletions .gitignore

This file was deleted.

@@ -0,0 +1 @@
I"�{"source"=>"/Users/xiaopel/Github/Jiqing1107.github.io", "destination"=>"/Users/xiaopel/Github/Jiqing1107.github.io/_site", "collections_dir"=>"", "cache_dir"=>".jekyll-cache", "plugins_dir"=>"_plugins", "layouts_dir"=>"_layouts", "data_dir"=>"_data", "includes_dir"=>"_includes", "collections"=>{"posts"=>{"output"=>true, "permalink"=>"blog/:year/:month/:day/:title"}}, "safe"=>false, "include"=>[".htaccess"], "exclude"=>["Makefile", "README.md", ".sass-cache", ".jekyll-cache", "gemfiles", "Gemfile", "Gemfile.lock", "node_modules", "vendor/bundle/", "vendor/cache/", "vendor/gems/", "vendor/ruby/"], "keep_files"=>[".git", ".svn"], "encoding"=>"utf-8", "markdown_ext"=>"markdown,mkdown,mkdn,mkd,md", "strict_front_matter"=>false, "show_drafts"=>nil, "limit_posts"=>0, "future"=>false, "unpublished"=>false, "whitelist"=>[], "plugins"=>[], "markdown"=>"kramdown", "highlighter"=>"rouge", "lsi"=>false, "excerpt_separator"=>"\n\n", "incremental"=>false, "detach"=>false, "port"=>"4000", "host"=>"127.0.0.1", "baseurl"=>"", "show_dir_listing"=>false, "permalink"=>"blog/:year/:month/:day/:title", "paginate_path"=>"/page:num", "timezone"=>nil, "quiet"=>false, "verbose"=>false, "defaults"=>[], "liquid"=>{"error_mode"=>"warn", "strict_filters"=>false, "strict_variables"=>false}, "kramdown"=>{"auto_ids"=>true, "toc_levels"=>[1, 2, 3, 4, 5, 6], "entity_output"=>"as_char", "smart_quotes"=>"lsquo,rsquo,ldquo,rdquo", "input"=>"GFM", "hard_wrap"=>false, "guess_lang"=>true, "footnote_nr"=>1, "show_warnings"=>false}, "title"=>"Xiaopeng LI", "description"=>"", "sourcecode"=>"https://github.com/gchauras/much-worse-jekyll-theme", "url"=>"http://localhost:4000", "author"=>{"name"=>nil, "facebook"=>nil, "scholar"=>nil}, "analytics"=>{"provider"=>nil, "statcounter"=>{"sc_project"=>nil, "sc_security"=>nil, "sc_invisible"=>1, "sc_text"=>2}, "google"=>{"tracking_id"=>""}, "getclicky"=>{"site_id"=>nil}, "mixpanel"=>{"token"=>""}, "piwik"=>{"baseURL"=>"", "idsite"=>""}}, "comments"=>{"provider"=>nil, "disqus"=>{"short_name"=>nil}, "livefyre"=>{"site_id"=>nil}, "intensedebate"=>{"account"=>nil}, "facebook"=>{"appid"=>nil, "num_posts"=>5, "width"=>580, "colorscheme"=>"light"}}, "watch"=>true, "livereload_port"=>35729, "serving"=>true}:ET
@@ -0,0 +1,75 @@
I"w<h2 id="publications">Publications</h2>

<p><a href=""><strong>Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering</strong></a><br />
Xiaopeng Li, Zhourong Chen and Nevin L. Zhang<br />
<em>International Conference on Learning Representations</em>
<em>2019</em>
<br />Media: [<a href="https://arxiv.org/abs/1803.05206">arXiv</a>]</p>

<p><a href=""><strong>Building Sparse Deep Feedforward Networks using Tree Receptive Fields</strong></a><br />
Xiaopeng Li, Zhourong Chen and Nevin L. Zhang<br />
<em>International Joint Conference on Artificial Intelligence</em>
<em>2018</em>
<br />Media: [<a href="https://arxiv.org/abs/1803.05209">arXiv</a>][<a href="https://github.com/eelxpeng/TreeReceptiveFields">github</a>]</p>

<p><a href=""><strong>Learning Sparse Deep Feedforward Networks via Tree Skeleton Expansion</strong></a><br />
Zhourong Chen, Xiaopeng Li and Nevin L. Zhang<br />
<em>arXiv</em>
<em>2018</em>
<br />Media: [<a href="http://arxiv.org/abs/1803.06120">arXiv</a>]</p>

<p><a href=""><strong>Relational Variational Autoencoder for Link Prediction with Multimedia Data</strong></a><br />
X. Li and J. She<br />
<em>ACM SIGMM International Conference on Multimedia Thematic Workshop</em>
<em>2017</em>
<br />Media: [<a href="">paper</a>][<a href="https://github.com/eelxpeng/RVAE">github</a>]</p>

<p><a href=""><strong>Collaborative Variational Autoencoder for Recommender Systems</strong></a><br />
X. Li and J. She<br />
<em>ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</em>
<em>2017</em>
<br />Media: [<a href="/assets/paper/Collaborative_Variational_Autoencoder.pdf">paper</a>][<a href="https://github.com/eelxpeng/CollaborativeVAE">github</a>]</p>

<p><a href=""><strong>A Bayesian Neural Network for Deep Learning in Mobile Multimedia using Small Data</strong></a><br />
X. Li, J. She and M. Cheung<br />
<em>Submitted to ACM Trans. Multimedia Comput. Commun. Appl. (Under Review)</em>
<em>2016</em>
<br />Media: [<a href="">paper</a>]</p>

<p><a href=""><strong>Connection Discovery using Shared Images by Gaussian Relational Topic Model</strong></a><br />
X. Li, M. Cheung and J. She<br />
<em>IEEE International Conference on Big Data</em>
<em>2016</em>
<br />Media: [<a href="/assets/paper/GRTM.pdf">paper</a>][<a href="https://github.com/eelxpeng/GRTM">github</a>]</p>

<p><a href=""><strong>A Distributed Streaming Framework for Connection Discovery Using Shared Videos</strong></a><br />
X. Li, M. Cheung and J. She<br />
<em>ACM Trans. Multimedia Comput. Commun. Appl.</em>
<em>Sep. 18, 2017</em>
<br />Media: [<a href="">paper</a>]</p>

<p><a href=""><strong>An Efficient Computation Framework for Connection Discovery using Shared Images</strong></a><br />
M. Cheung, X. Li and J. She<br />
<em>ACM Trans. Multimedia Comput. Commun. Appl.</em>
<em>Aug. 29, 2017</em>
<br />Media: [<a href="">paper</a>]</p>

<p><a href=""><strong>Dance Background Image Recommendation with Deep Matrix Factorization</strong></a><br />
J. Wen, J. She, X. Li and H. Mao<br />
<em>ACM Trans. Multimedia Comput. Commun. Appl.</em>
<em>2018</em>
<br />Media: [<a href="">paper</a>]</p>

<p><a href=""><strong>Visual Background Recommendation for Dance Performances Using Dancer-Shared Images</strong></a><br />
J. Wen, X. Li, J. She, S. Park and M. Cheung<br />
<em>IEEE International Conference on Cyber Physical and Social Computing</em>
<em>2016</em>
<br />Media: [<a href="/assets/paper/Visual_Background_Recommendation_for_Dance_Performances_Using_Dancer-Shared_Images.pdf">paper</a>]</p>

<p><a href=""><strong>Non-user Generated Annotation on User Shared Images for Connection Discovery</strong></a><br />
M. Cheung, J. She and X. Li<br />
<em>IEEE International Conference on Cyber Physical and Social Computing</em>
<em>2015</em>
<br />Media: [<a href="http://ieeexplore.ieee.org/document/7396504/?arnumber=7396504&amp;tag=1">paper</a>]</p>

:ET
@@ -0,0 +1,2 @@
I"<p>During my research of Bayesian Deep Models (integration of Bayesian graphical models with deep learning models), I found several handy tricks when dealing with sigmoid functions. Here, I summarize several for future use and also for other researchers who might find it useful.</p>
:ET
@@ -0,0 +1,17 @@
I"�<div class="captioned-img alignright">
<a href="images/photo.jpg">

<img src="images/photo.jpg" width="300" />
</a>
</div>

<p>Jiqing Wen is currently doing her PhD at Arizona State University. She works in the area of machine learning and artificial intelligence. Her current research interests include deep learning, Bayesian networks and graphical models, Bayesian deep learning, and their applications in computer vision, natural language processing, and recommender systems.</p>

<h2 id="contact">Contact</h2>

<p>Department of Computer Science and Engineering <br />
The Hong Kong University of Science and Technology <br />
Kowloon, Hong Kong<br />
Email: <a href="mailto:[email protected]">[email protected]</a></p>

:ET
@@ -0,0 +1,28 @@
I"e<p>During my research of Bayesian Deep Models (integration of Bayesian graphical models with deep learning models), I found several handy tricks when dealing with sigmoid functions. Here, I summarize several for future use and also for other researchers who might find it useful.</p>

<h3 id="variational-lower-bound-on-sigmoid-sigmax">Variational Lower Bound on Sigmoid $\sigma(x)$</h3>

<h3 id="expectation-of-sigmoid-function-with-normal-distribution">Expectation of Sigmoid function with Normal distribution</h3>
<p>Consider the following logistic-normal integral:</p>

\[g=\int_{-\infty}^{\infty} \sigma(x)\mathcal{N}(x|\mu, \sigma^2) dx = \int_{-\infty}^{\infty} \frac{1}{1+e^{-x}} \frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}} dx.\]

<p>The logistic-normal integral has no analytic expression. However, for mathematical convenience, we can approximate it. In the end, we will show that the integral is approximately a reparameterized logistic function.</p>

<p>First, we can approximate the sigmoid function with a probit function.</p>

\[\sigma(x)\approx \Phi(\xi x), \text{where } \Phi(x)=\int_{-\infty}^x \mathcal{N}(\theta|0,1)d\theta, \text{and } \xi^2=\frac{\pi}{8}\]
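<p>As a quick numerical sanity check (my own sketch, not part of the derivation), the gap between the sigmoid and the scaled probit can be inspected directly; it stays below about 0.02 everywhere:</p>

<pre><code class="language-python"># Sketch: compare sigma(x) against Phi(xi * x) with xi^2 = pi/8.
import numpy as np
from scipy.stats import norm

xi = np.sqrt(np.pi / 8)
x = np.linspace(-8, 8, 2001)
sigmoid = 1.0 / (1.0 + np.exp(-x))
probit = norm.cdf(xi * x)
print(np.max(np.abs(sigmoid - probit)))  # about 0.017 at its worst (near |x| ~ 3)
</code></pre>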

<p>A useful fact is that the probit-normal integral is just another probit function:</p>

\[\int \Phi(x) \mathcal{N}(x|\mu,\sigma^2) dx = \Phi(\frac{\mu}{\sqrt{1+\sigma^2}})\]
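<p>This identity is easy to verify by Monte Carlo; the snippet below is a minimal sketch with arbitrary example values for $\mu$ and $\sigma$:</p>

<pre><code class="language-python"># Sketch: Monte Carlo check of  E[Phi(x)] = Phi(mu / sqrt(1 + sigma^2))
# for x ~ N(mu, sigma^2). mu and sigma are illustrative values.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = 0.7, 1.5
samples = rng.normal(mu, sigma, size=1_000_000)
lhs = norm.cdf(samples).mean()              # Monte Carlo estimate
rhs = norm.cdf(mu / np.sqrt(1 + sigma**2))  # closed form
print(lhs, rhs)  # agree to about three decimal places
</code></pre>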

<p>Therefore,</p>

\[g\approx \int_{-\infty}^{\infty} \Phi(\xi x)\mathcal{N}(x|\mu, \sigma^2) dx = \Phi(\frac{\xi \mu}{\sqrt{1+\xi^2\sigma^2}})\approx \sigma(\frac{\mu}{\sqrt{1+\xi^2\sigma^2}}) = \sigma(\frac{\mu}{\sqrt{1+\pi\sigma^2/8}})\]

<p>In other words, for a normally distributed random variable $x$, the expectation of the sigmoid of $x$ is approximately the sigmoid of $\mathbb{E}[x]$, with an adjustment for the variance.</p>
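<p>To see how good the end-to-end approximation is, here is a small Monte Carlo comparison (my own sketch; the values of $\mu$ and $\sigma$ are illustrative):</p>

<pre><code class="language-python"># Sketch: E[sigma(x)] for x ~ N(mu, sigma^2), estimated by sampling,
# versus the closed-form approximation sigma(mu / sqrt(1 + pi*sigma^2/8)).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)
mc = np.mean(1.0 / (1.0 + np.exp(-x)))
approx = 1.0 / (1.0 + np.exp(-mu / np.sqrt(1 + np.pi * sigma**2 / 8)))
print(mc, approx)  # typically within ~1e-2 of each other
</code></pre>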

<h3 id="some-others">Some others</h3>
<p>\(\tanh(x)=2\sigma(2x)-1\)</p>
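<p>Unlike the approximations above, this identity is exact; a quick numerical check (sketch):</p>

<pre><code class="language-python"># Sketch: tanh(x) == 2*sigma(2x) - 1, exact up to floating-point error.
import numpy as np

x = np.linspace(-5, 5, 101)
print(np.max(np.abs(np.tanh(x) - (2.0 / (1.0 + np.exp(-2 * x)) - 1))))  # ~0
</code></pre>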
:ET
@@ -0,0 +1,2 @@
I"�<p>You’ll find this post in your <code class="language-plaintext highlighter-rouge">_posts</code> directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run <code class="language-plaintext highlighter-rouge">jekyll serve --watch</code>, which launches a web server and auto-regenerates your site when a file is updated.</p>
:ET
@@ -0,0 +1,2 @@
I"V<p>Lately, I’m trying to investigate Bayesian Deep Learning and seriously considering it to be my PhD topic. Combining Bayesian with Deep Learning is current hot topic and with the current development of stochastic gradient monte carlo, I think it’s time for Bayesian Deep Learning to fly. And I could probably benefit from it a lot.</p>
:ET
@@ -0,0 +1,5 @@
I"<p>My CV is below and can be downloaded via <a href="/images/cv.pdf">PDF version</a>.</p>

<iframe src="http://docs.google.com/viewer?url=http://eelxpeng.github.io/images/cv.pdf&amp;hl=en_US&amp;embedded=true" style="width:100%; height:800px; border:0;" scrolling="no"></iframe>

:ET
@@ -0,0 +1,2 @@
I"�<p>Variational Autoencoder (VAE) has been proposed for two years. During the past two years, some good papers related variational autoencoder come up time to time. And I think it is a good tool worth investigating. Recently, I decide to do something about collaborative recommendation with cross-modality multimedia content using Bayesian deep learning. I think VAE could be a good help. In this post, I’ll investigate and explain VAE in my way.</p>
:ET
@@ -0,0 +1,2 @@
I"�<p>Lately I’m dealing with Bayesian non-parametric in order for the praparation of my next paper. Therefore, I spent several days trying to learn and understand Dirichlet process. Dirichlet process is at first difficult to understand, mainly because it is very different from our previous parametric methods and it uses advanced mathmetical concepts. I struggled several days to finally understand Dirichlet process. Once you understand it, it becomes very easy.</p>
:ET