<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<link rel="shortcut icon" href="img/sheep1.jpeg">
<title>Yang Yang - Zhejiang University</title>
<!-- Bootstrap core CSS -->
<link href="dist/css/bootstrap.min.css" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="jumbotron.css" rel="stylesheet">
<!-- css for particles -->
<link href="dist/css/particles.css" rel="stylesheet">
<!-- Just for debugging purposes. Don't actually copy this line! -->
<!--[if lt IE 9]><script src="../../assets/js/ie8-responsive-file-warning.js"></script><![endif]-->
<!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body>
<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
<div class="container">
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<!-- <img height="50" src="img/sheep2.jpeg" align="left" hspace="6" style="margin-left:-6p;margin-right:20px"> -->
<a class="navbar-brand" href="#">Yang Yang (杨洋)</a>
</div>
<div class="navbar-collapse collapse">
<ul class="nav nav-pills pull-right">
<li class="active"><a href="#">Home</a></li>
<li><a href="#research">Research</a></li>
<li><a href="#publication">Publications</a></li>
<!-- <li><a href="#award">Awards</a></li> -->
<li><a href="data.html">Data</a></li>
</ul>
<!--
<form class="navbar-form navbar-right" role="form">
<div class="form-group">
<input type="text" placeholder="Email" class="form-control">
</div>
<div class="form-group">
<input type="password" placeholder="Password" class="form-control">
</div>
<button type="submit" class="btn btn-success">Sign in</button>
</form>
-->
</div><!--/.navbar-collapse -->
</div>
</div>
<!-- Main jumbotron for a primary marketing message or call to action -->
<div class="jumbotron">
<div class="container">
<img height="160" src="img/yang-sea.jpeg" align="left" hspace="6" style="margin-left:-6p;margin-right:20px">
<b> <font size="5"> Yang Yang 杨洋 </font> </b>
<p></p>
<p> <i>Associate Professor, Zhejiang University </i></p>
<p> <i> <b> Email:</b> yangya {at} zju [dot] edu [dot] cn</i> </p>
<p> <i><b> Office:</b> Room 415, CGB Building, Yuquan Campus </i> </p>
<div class="container">
<p></p>
<p> I am an associate professor of <a target="_blank" href="http://www.cs.zju.edu.cn/">Computer Science and Technology</a> at <a target="_blank" href="http://www.zju.edu.cn/">Zhejiang University</a>, serving as the head of Artificial Intelligence.
I received the NSFC grant for Excellent Young Scholars.
I am also a scientific advisor at <a target="_blank" href="https://ir.finvgroup.com/Home">FinVolution Group</a>.
My research focuses on artificial intelligence for large-scale graph and time series data.
I obtained my Ph.D. degree
from <a target="_blank" href="https://www.tsinghua.edu.cn/">Tsinghua University</a> in 2016, fortunately advised by <a target="_blank" href="http://keg.cs.tsinghua.edu.cn/jietang/">Jie Tang</a> and <a target="_blank" href="http://keg.cs.tsinghua.edu.cn/persons/ljz/">Juanzi Li</a>.
I was a visiting researcher at Cornell University (working with <a target="_blank" href="http://www.cs.cornell.edu/jeh/">John Hopcroft</a>) in 2012 and at the University of Leuven (working with <a target="_blank" href="http://people.cs.kuleuven.be/~sien.moens/">Marie-Francine Moens</a>)
in 2013.
During my Ph.D. studies, I also had <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a> from UCLA as a research advisor.
Here is
<!-- For detailed personal information, please refer to -->
my <a target="_blank" href="works/cv/yangyang_cv_202308.pdf">CV</a>. </p>
<p><font color="red">
I am looking for <u>highly motivated</u> students to work with me. </font> If interested, please drop me a message by email.
If you are requesting a reference letter from me, please ensure that we have collaborated for a minimum of six months, so that I can provide a comprehensive and useful assessment of your capabilities and contributions.
</p>
<!--<p> References for the Ph.D (M.S.) program: since December 2023, I write reference letters for students who have worked with me for at least 6 months, to make sure there is sufficiently useful information. </p>
-->
<!--
<p>Open research intern positions: <font color="red"> I am looking for <u> highly-motivated </u> students to work with me. </font> If interested, please drop me a message by email. </p>
-->
<div class="page-header">
<h4>What's New</h4>
<!--
<li style="margin:10px"> <b>We have a few open positions for <a target="_blank" href="postdoc.html">postdoctoral</a>. <a target="_blank" href="postdoc.html">Please read this for details.</a> </b> </li>
-->
<li style="margin:10px"> <b>[Oct. 2024]</b> "<a target="_blank" href="works/power/NeurIPS24_PowerPM.pdf">PowerPM: Foundation Model for Power Systems</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Oct. 2024]</b> "<a target="_blank" href="works/brainnet/NeurIPS24_DMNet.pdf">DMNet: Self-comparison Driven Model for Subject-independent Seizure Detection</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Oct. 2024]</b> "<a target="_blank" href="works/brainnet/NeurIPS24_Con4M.pdf">Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Oct. 2024]</b> "<a target="_blank" href="works/gnn/NeurIPS24_Molextract.pdf">Extracting Training Data from Molecular Pre-trained Models</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Oct. 2024]</b> "<a target="_blank" href="works/gnn/NeurIPS24_Attack.pdf">Towards More Efficient Property Inference Attacks on Graph Neural Networks</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/brainnet/KDD24_BrantX.pdf">Brant-X: A Unified Physiological Signal Alignment Framework</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/gnn/KDD24_Adaptation.pdf"">Can Modifying Data Address Graph Domain Adaptation?</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/gnn/KDD24_Privacy.pdf">Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/application/KDD24_Chromosome.pdf">Chromosomal Structural Abnormality Diagnosis by Homologous Similarity</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/llm/An_expert_is_worth_one_token_ACL24.pdf">An Expert is Worth One Token: Synergizing Multiple Expert LLMs as Generalist via Expert Token Routing</a>" is accepted by ACL.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/gnn/ICML2024_Correlation_SSL.pdf">Exploring Correlations of Self-supervised Tasks for Graphs</a>" is accepted by ICML.</li>
<li style="margin:10px"> <b>[May. 2024]</b> "<a target="_blank" href="works/llm/InfiAgent_DABench-0529.pdf">InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks</a>" is accepted by ICML.</li>
<li style="margin:10px"> <b>[Apr. 2024]</b> "<a target="_blank" href="works/domain/IJCAI24_DWLR.pdf">DWLR: Domain Adaptation under Label Shift for Wearable Sensor</a>" is accepted by IJCAI.</li>
<li style="margin:10px"> <b>[Apr. 2024]</b> "<a target="_blank" href="works/domain/IJCAI24_disentangling.pdf">Disentangling Domain and General Representations for Time Series Classification</a>" is accepted by IJCAI.</li>
<li style="margin:10px"> <b>[Jan. 2024]</b> "<a target="_blank" href="works/gnn/WWW24_GraphSkeleton.pdf">Graph-Skeleton: ∼1% Nodes are Sufficient to Represent
Billion-Scale Graph</a>" is accepted by WWW.</li>
<li style="margin:10px"> <b>[Jan. 2024]</b> "<a target="_blank" href="works/gnn/WWW24_GraphAdapter.pdf">Can GNN be Good Adapter for LLMs?</a>" is accepted by WWW.</li>
<li style="margin:10px"> <b>[Jan. 2024]</b> "<a target="_blank" href="works/damf/ICLR24_FastSVD.pdf">Fast Updating Truncated SVD for Representation Learning with Sparse Matrices</a>" is accepted by ICLR.</li>
<!--
<li style="margin:10px"> <b>[Dec. 2023]</b> "<a target="_blank" href="works/gnn/AAAI24_Tuning.pdf">Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns</a>" is accepted by AAAI.</li>
<li style="margin:10px"> <b>[Dec. 2023]</b> "<a target="_blank" href="works/gnn/AAAI24_Measuring.pdf">Measuring Task Similarity and Its Implication in Fine-Tuning Graph Neural Networks</a>" is accepted by AAAI.</li>
<li style="margin:10px"> <b>[Dec. 2023]</b> "<a target="_blank" href="works/gnn/AAAI24_Federated.pdf">Towards Fair Graph Federated Learning via Incentive Mechanisms</a>" is accepted by AAAI.</li>
<li style="margin:10px"> <b>[Oct. 2023]</b> "<a target="_blank" href="works/social/EMNLP23_MediaHG.pdf">MediaHG: Rethinking Eye-catchy Features in Social Media Headline
Generation</a>" is accepted by EMNLP.</li>
<li style="margin:10px"> <b>[Sep. 2023]</b> "<a target="_blank" href="works/gnn/NeurIPS23_Prompt.pdf">Universal Prompt Tuning for Graph Neural Networks</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Sep. 2023]</b> "<a target="_blank" href="works/brainnet/NeurIPS23_Brant.pdf">Brant: Foundation Model for Intracranial Neural Signal</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Sep. 2023]</b> "<a target="_blank" href="works/brainnet/NeurIPS23_PPi.pdf">PPi: Pretraining Brain Signal Model for Patient-independent Seizure Detection</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[Sep. 2023]</b> "<a target="_blank" href="works/gnn/NeurIPS23_Less.pdf">Better with Less: A Data-Centric Prespective on Pre-Training Graph Neural Networks</a>" is accepted by NeurIPS.</li>
<li style="margin:10px"> <b>[May. 2023]</b> "<a target="_blank" href="works/damf/KDD_23_Accelerating_Dynamic_Network_Embedding_with_Billions_of_Parameters_Update_to_Milleseconds.pdf">Accelerating Dynamic Network Embedding with Billions of Parameter Updates to Milliseconds</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[May. 2023]</b> "<a target="_blank" href="works/brainnet/KDD23_MBrain.pdf">MBrain: A Multi-channel Self-Supervised Learning Framework for Brain Signals</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[May. 2023]</b> "<a target="_blank" href="works/graph_pretrain/KDD23_When_to_Pre-Train.pdf">When to Pre-Train Graph Neural Networks? An Answer from Data Generation Perspective!</a>" is accepted by KDD.</li>
<li style="margin:10px"> <b>[Feb. 2023]</b> "<a target="_blank" href="works/gnn/AAAI23_DropMessage.pdf">DropMessage: Unifying Random Dropping for Graph Neural Networks</a>" is awarded the <b>AAAI 2023 Distinguished Paper Award.</b></li>
<li style="margin:10px"> <b>[Feb. 2023]</b> "<a target="_blank" href="works/covid/COVID_Recovery_2023.pdf">Unfolding and Modeling the Recovery Process after COVID Lockdowns</a>" is accepted by Nature Scientific Reports. </li>
<li style="margin:10px"> <b>[Dec. 2022]</b> "<a target="_blank" href="works/gnn/AAAI23_Laundering.pdf">Towards Learning to Discover Money Laundering Sub-network in Massive Transaction Network</a>" is accepted by AAAI. </li>
<li style="margin:10px"> <b>[Sep. 2022]</b> "<a target="_blank" href="works/dgraph/dgraph_2022.pdf">DGraph: A Large-Scale Financial Dataset for Graph Anomaly Detection</a>" is accepted by NeurIPS. </li>
<li style="margin:10px"> <b>[Jun. 2022]</b> We release <a href="https://dgraph.xinye.com/introduction"><b>DGraph</b></a>, a benchmark for dynamic graphs with labelled finicial frauds in real world. </li>
<li style="margin:10px"> <b>[May. 2022]</b> "<a target="_blank" href="works/brainnet/KDD22_BrainNet.pdf">BrainNet: Epileptic Wave Detection from SEEG with Hierarchical Graph Diffusion Learning</a>" is accepted by KDD. </li>
<li style="margin:10px"> <b>[Apr. 2022]</b> "<a target="_blank" href="works/gnn/IJCAI22_Abnormality.pdf">Can Abnormality be Detected by Graph Neural Networks?</a>" is accepted by IJCAI. </li>
<li style="margin:10px"> <b>[Apr. 2022]</b> "<a target="_blank" href="works/gnn/IJCAI22_Beyond.pdf">Beyond Homophily: Structure-aware Path Aggregation Graph Neural Network</a>" is accepted by IJCAI. </li>
<li style="margin:10px"> <b>[Feb. 2022]</b> "<a target="_blank" href="works/taocode/TKDE22_rising_star.pdf">Who's Next: Rising Star Prediction via Diffusion of User Interest in Social Networks</a>" is accepted by TKDE. </li>
<li style="margin:10px"> <b>[Dec. 2021]</b> "<a target="_blank" href=”https://arxiv.org/pdf/2012.02486.pdf“>Unsupervised Adversarially Robust Representation Learning on Graphs</a>" is accepted by AAAI. </li>
<li style="margin:10px"> <b>[Dec. 2021]</b> "<a target="_blank" href="https://arxiv.org/pdf/2012.06757.pdf">Blindfolded Attackers Still Threatning: Strict Black-Box Adversarial Attacks on Graphs</a>" is accepted by AAAI. </li>
<li style="margin:10px"> <b>[Oct. 2021]</b> <a target="_blank" href="https://galina0217.github.io/">Jiarong Xu</a> passes PhD defense and joins Fudan University as assistant professor. Congratulations Prof. Xu!</li>
<li style="margin:10px"> <b>[Jun. 2021]</b> Our paper "<a target="_blank" href="works/robust/tkde2021_netrl.pdf">NetRL: Task-aware Network Denoising via Deep Reinforcement Learning</a>" is accepted by TKDE. </li>
<li style="margin:10px"> <b>[May. 2021]</b> Our paper "<a href="works/t2g/time2graphplus_tkde21.pdf">Time2Graph+: Bridging Time Series and Graph Representation Learning via Multiple Attentions</a>" is accepted by TKDE. </li>
<li style="margin:10px"> <b>[Apr. 2021]</b> Our paper "<a href="works/taocode/SIGIR21_Taocode_Diffusion.pdf">How Powerful are Interest Diffusion on Purchasing Prediction: A Case Study of Taocode</a>" is accepted by SIGIR. </li>
<li style="margin:10px"> <b>[Feb. 2021]</b> I am serving as the sponsorship co-chair of the Web Conference (WWW) 2021. <a href="https://www2021.thewebconf.org/sponsor/">If you are interested in sponsoring us, please read this. </a> </li>
<li style="margin:10px"> <b>[Oct. 2020]</b> Our paper "Time-Series Event Prediction with Evolutionary State Graph" is accepted by WSDM 2021. </li>
<li style="margin:10px"> <b>[Sep. 2020]</b> Organizing <a href="https://smp2020.aconf.cn/">SMP'20</a> online, serving as program chair. </li>
<li style="margin:10px"> <b>[Sep. 2020]</b> Our paper "Robust Network Enhancement from Flawed Networks" is accepted by TKDE. </li>
<li style="margin:10px"> <b>[Aug. 2020]</b> Giving a talk at Fudan University. </li>
<li style="margin:10px"> <b>[Jan. 2020]</b> Our paper "Understanding Electricity-Theft Behavior via Multi-Source Data" is accepted by WWW 2020. </li>
<li style="margin:10px"> <b>[Nov. 2019]</b> Our paper "Time2Graph: Revisiting Time Series Modeling with
Dynamic Shapelets" is accepted by AAAI 2020. </li>
<li style="margin:10px"> <b>[Jun. 2019]</b> Our paper "Mining Fraudsters and Fraudulent Strategies in Large-Scale Mobile Social Networks" is accepted by TKDE. </li>
<li style="margin:10px"> <b>[Jan. 2019]</b> Our paper "How Do Your Neighbors Disclose Your Information: Social-Aware Time Series Imputation" is accepted by WWW 2019. </li>
<li style="margin:10px"> <b>[Jan. 2019]</b> Our paper "What Makes a Good Team? A Large-scale Study on the Effect of Team Composition in Honor of Kings" is accepted by WWW 2019. </li>
<li style="margin:10px"><b>[Jan. 2019]</b> Officially promoted to Associate Professor, effective December, 2018. </li>
<li style="margin:10px"> <b>[Jan. 2019]</b> Giving talks at Peking University and Tomorrow Advancing Life (好未来). </li>
<li style="margin:10px"> <b>[Dec. 2018]</b> Giving a talk at East China Normal Univeristy. </li>
<li style="margin:10px"> <b>[Aug. 2018]</b> Host tutorial and data mining forum at <a href="http://smp2018.cips-smp.org/">SMP 2018</a>. Download the slides of all talks <a href="https://pan.baidu.com/disk/home?errno=0&errmsg=Auth%20Login%20Sucess&&bduss=&ssnerror=0&traceid=#/all?vmode=list&path=%2Fsmp18">here</a>. </li>
<li style="margin:10px"> <font color="red"> I am looking for <u> highly-motivated </u> students to work with me. </font> If interested, please drop me a message by email. </li>
-->
</div>
</div>
</div>
</div>
<a name="research"></a>
<div class="container">
<div class="page-header">
<h2>Recent Research</h2>
</div>
<div class="page-header">
<h4>Foundation Model for Brain Signals</h4>
</div>
<img width="310" src="img/brant.jpg" align="left" style="margin-left:20px;margin-right:20px">
<p>
The goal is to establish a universal model for brain signals, enhancing performance in various downstream tasks within the healthcare domain while empowering a quantitative understanding of brain activity in neuroscience.
</p>
<p>
Starting from a real medical scenario of seizure detection, we automatically identify epileptic waves in intracranial brain signals for medication-resistant patients, expediting the localization of lesions within the brain. Inspired by neuroscience research, we initially model the diffusion patterns of epileptic waves for individual patients (<i>BrainNet</i>, <a target="_blank" href="works/brainnet/KDD22_BrainNet.pdf">Chen et al., KDD'22</a>). Subsequently, we employ self-supervised learning to capture universal spatiotemporal correlations between signals from different brain regions, facilitating transferability across different patients (<i>MBrain</i>, <a target="_blank" href="works/brainnet/KDD23_MBrain.pdf">Cai et al., KDD'23</a>; <i>PPi</i>, <a target="_blank" href="works/brainnet/NeurIPS23_PPi.pdf">Yuan et al., NeurIPS'23</a>).
</p>
<p>
Towards a foundation model, we first pretrain a model with 500M parameters on a large volume of intracranial brain signals (<i>Brant</i>, <a target="_blank" href="works/brainnet/NeurIPS23_Brant.pdf">Zhang et al., NeurIPS'23</a>). We then integrate EEG into the pretraining corpus, building a foundation model with 1B parameters that generalizes to a broader range of downstream tasks such as sleep staging and emotion recognition (<i>Brant-2</i>, <a target="_blank" href="https://arxiv.org/abs/2402.10251">arXiv:2402.10251</a>).
Capitalizing on the robust generalization capabilities of Brant-2, we propose a unified alignment framework (<i>Brant-X</i>) to rapidly adapt it to downstream tasks involving rare physiological signals (e.g., EOG/ECG/EMG).
Constructing a universal foundation model also requires a comprehensive dataset encompassing a wide array of domains. Confronted with the scarcity of brain signal data, we explore a diffusion-based model for generating intracranial brain signals (<i>NeuralDiff</i>) and further synthesize unlimited sequences, circumventing the dependence on real data (<i>InfoBoost</i>, <a target="_blank" href="http://arxiv.org/abs/2402.00607">arXiv:2402.00607</a>).
</p>
<!--
The goal is to model intracranial neural signals to empower neural science with the ability to understand brain activities quantitatively.
At this stage, we put our focus on serving epilepsy patients, shortening their treatment cycle by automatically identifying epileptic foci.
</p>
<p>
Epilepsy is one of the most common and serious neurological disorders, affecting approximately 65 million people worldwide. One-third of patients are medication-resistant, and surgical resection of the lesion tissue through neurosurgery is necessary.
We develop deep models to automatically identify epileptic waves for seizure localization.
To do this, we mainly employ the SEEG method for brain signal recording, which inserts depth electrodes into the human brain and provides recordings from both cortical and subcortical structures simultaneously.
</p>
<p>
We first study epileptic diffusion patterns to identify epileptic waves for a particular individual by training from his or her historical data (<i>BrainNet</i>, <a target="_blank" href="works/brainnet/KDD22_BrainNet.pdf">Chen et al., KDD'22</a>).
To further achieve transferability between different individuals, we propose to learn generalized and individual-independent representations for brain signals via self-supervised pre-training based on capturing spatio-temporal correlations among signals (<i>MBrain</i>, <a target="_blank" href="works/brainnet/KDD23_MBrain.pdf">Cai et al., KDD'23</a>; <i>PPi</i>, <a target="_blank" href="works/brainnet/NeurIPS23_PPi.pdf">Yuan et al., NeurIPS'23</a>).
With the continue extention of both our collected data and downstream tasks, we are working on pre-training foundation model with large parameters (<i>Brant</i>, <a target="_blank" href="works/brainnet/NeurIPS23_Brant.pdf">Zhang et al., NeurIPS'23</a>).
</p>
-->
<p>
<b>Papers: </b>
(<a target="_blank" href="works/brainnet/KDD22_BrainNet.pdf">Chen et al., KDD'22</a>),
(<a target="_blank" href="works/brainnet/KDD23_MBrain.pdf">Cai et al., KDD'23</a>),
(<a target="_blank" href="works/brainnet/NeurIPS23_PPi.pdf">Yuan et al., NeurIPS'23</a>),
(<a target="_blank" href="works/brainnet/NeurIPS23_Brant.pdf">Zhang et al., NeurIPS'23</a>),
(<a target="_blank" href="works/brainnet/KDD24_BrantX.pdf">Zhang et al., KDD'24</a>),
(<a target="_blank" href="https://arxiv.org/abs/2402.10251">Brant-2, arXiv:2402.10251</a>),
(<a target="_blank" href="http://arxiv.org/abs/2402.00607">InfoBoost, arXiv:2402.00607</a>).
</p>
<div class="page-header">
<h4>Foundation Model for Graphs</h4>
</div>
<img width="310" src="img/graphfm.jpg" align="left" style="margin-left:20px;margin-right:20px;margin-bottom:10px">
<p>
The goal is to pre-train a general graph foundation model using a large corpus of graph data. With appropriate fine-tuning, such a model can achieve satisfactory performance across various downstream tasks, which showcases both its broad application potential and the numerous challenges it entails.
</p>
<p>
To achieve this goal, we design base graph models with enhanced expressive capabilities (<i><a target="_blank" href="https://github.com/zjunet/DropMessage">DropMessage</a></i>, <a target="_blank" href="works/gnn/AAAI23_DropMessage.pdf">Fang et al., AAAI'23</a>; <i><a target="_blank" href="https://github.com/zjunet/PathNet">PathNet</a></i>, <a target="_blank" href="works/gnn/IJCAI22_Beyond.pdf">Sun et al., IJCAI'22</a>) and investigate how to select appropriate pre-training corpora
(<i><a target="_blank"
href="https://github.com/zjunet/W2PGNN">W2PGNN</a></i>, <a target="_blank" href="works/graph_pretrain/KDD23_When_to_Pre-Train.pdf">Cao et al., KDD'23</a>; <a target="_blank" href="works/gnn/NeurIPS23_Less.pdf">Xu et al., NeurIPS'23</a>). We also conduct an in-depth study on the crucial role of pre-training strategies in the construction of the graph foundation model and analyze existing graph self-supervised methods from a unified perspective (<i><a target="_blank"
href="https://github.com/zjunet/GraphTCM">GraphTCM</a></i>, <a target="_blank" href="works/gnn/ICML2024_Correlation_SSL.pdf">Fang et al., ICML'24</a>). When adapting the pre-trained graph foundation model to downstream tasks, we explore the intrinsic factors that determine the model's final performance (<i><a target="_blank" href="https://github.com/zjunet/G-Tuning">G-Tuning</a></i>, <a target="_blank" href="works/gnn/AAAI24_Tuning.pdf">Sun et al., AAAI'24</a>; <i><a target="_blank"
href="https://github.com/zjunet/Bridge-Tune">Bridge-Tune</a></i>, <a target="_blank" href="works/gnn/AAAI24_Measuring.pdf">Huang et al., AAAI'24</a>) and design various effective and parameter-efficient adaptation methods (<i><a target="_blank" href="https://github.com/zjunet/GPF">GPF</a></i>, <a target="_blank" href="works/gnn/NeurIPS23_Prompt.pdf">Fang et al., NeurIPS'23</a>; Huang et al., KDD'24).
In addition, we have released a large-scale dynamic graph financial network pre-training dataset, <a href="https://dgraph.xinye.com/introduction">DGraph</a> (<a target="_blank" href="works/dgraph/dgraph_2022.pdf">Huang et al., NeurIPS'22</a>), addressing the lack of graph datasets in this field.
</p>
<p>
<b>Papers: </b>
(<a target="_blank" href="works/gnn/AAAI23_DropMessage.pdf">Fang et al., AAAI'23</a>),
(<a target="_blank" href="works/graph_pretrain/KDD23_When_to_Pre-Train.pdf">Cao et al., KDD'23</a>),
(<a target="_blank" href="works/gnn/NeurIPS23_Less.pdf">Xu et al., NeurIPS'23</a>),
(<a target="_blank" href="works/gnn/ICML2024_Correlation_SSL.pdf">Fang et al., ICML'24</a>),
(<a target="_blank" href="works/gnn/AAAI24_Tuning.pdf">Sun et al., AAAI'24</a>),
(<a target="_blank" href="works/gnn/IJCAI22_Beyond.pdf">Sun et al., IJCAI'22</a>),
(<a target="_blank" href="works/gnn/AAAI24_Measuring.pdf">Huang et al., AAAI'24</a>),
(<a target="_blank" href="works/gnn/NeurIPS23_Prompt.pdf">Fang et al., NeurIPS'23</a>),
(<a target="_blank" href="works/dgraph/dgraph_2022.pdf">Huang et al., NeurIPS'22</a>)
</p>
<p>
<b>Codes: </b>
[<a target="_blank" href="https://github.com/zjunet/DropMessage">DropMessage</a>]
[<a target="_blank" href="https://github.com/zjunet/PathNet">PathNet</a>]
[<a target="_blank" href="https://github.com/zjunet/W2PGNN">W2PGNN</a>]
[<a target="_blank" href="https://github.com/zjunet/GraphTCM">GraphTCM</a>]
[<a target="_blank" href="https://github.com/zjunet/G-Tuning">G-Tuning</a>]
[<a target="_blank" href="https://github.com/zjunet/Bridge-Tune">Bridge-Tune</a>]
[<a target="_blank" href="https://github.com/zjunet/GPF">GPF</a>]
</p>
<p>
<b>Dataset:</b>
[<a href="https://dgraph.xinye.com/introduction">DGraph</a>]
</p>
</div>
<div class="container">
<div class="page-header">
<h4>Collaboration Dynamics of Large Language Models</h4>
</div>
<img width="310" src="img/llm.jpg" align="left" style="margin-left:20px;margin-right:20px">
<p>
The goal is to enhance the versatility of Large Language Models (LLMs) across specialized domains. Driven by the reality that many real-world applications, such as financial risk management and power grid scheduling, demand a multidisciplinary strategy, we take inspiration from human ingenuity. Humans have a remarkable ability to navigate complex issues by integrating a wide array of expertise through collaborative efforts. With this in mind, we explore the dynamics of LLMs when they collaborate with domain-specific models to tackle challenging real-world problems.
</p>
<p>
We embark on our journey by examining the synergy between LLMs and Graph Neural Networks (GNNs). GNNs are inherently crafted for processing graph data, a prevalent format in real-world scenarios. We investigate how LLMs can collaborate with GNNs to boost their graph reasoning capability (<i><a target="_blank" href="https://github.com/mistyreed63849/Graph-LLM">GraphLLM</a></i>, <a target="_blank" href="https://arxiv.org/abs/2310.05845">arXiv:2310.05845</a>). Additionally, we study the
possibility of LLMs and GNNs collaborating through an innovative framework that positions GNNs as a unique class of adapter modules (<i><a target="_blank" href="https://github.com/zjunet/GraphAdapter">GraphAdapter</a></i>, <a target="_blank" href="works/gnn/WWW24_GraphAdapter.pdf">Huang et al., WWW'24</a>). Furthermore, we explore how LLMs can collaborate with specialized agents (<i><a target="_blank" href="https://github.com/zjunet/ETR">ETR</a></i>, <a target="_blank" href="works/llm/An_expert_is_worth_one_token_ACL24.pdf">Chai et al.,
ACL'24</a>), where a unified generalist framework is built to facilitate seamless integration of multiple expert LLMs.
In addition to our theoretical explorations, we have launched key datasets to assess LLMs in specific domains.
For graph-related tasks, we introduce a new dataset from social media, merging text and graph data (<a target="_blank" href="works/gnn/WWW24_GraphAdapter.pdf">Huang et al., WWW'24</a>).
Additionally, for analyzing LLM-based agents' data analytics capabilities, we publish the InfiAgent-DABench benchmark (<i><a target="_blank" href="https://github.com/InfiAgent/InfiAgent">InfiAgent</a></i>, <a target="_blank" href="works/llm/InfiAgent_DABench-0529.pdf">Hu et al., ICML'24</a>).
</p>
<p>
<b>Papers: </b>
[<a target="_blank" href="works/llm/An_expert_is_worth_one_token_ACL24.pdf">Chai et al., ACL'24</a>]
(<a target="_blank" href="works/gnn/WWW24_GraphAdapter.pdf">Huang et al., WWW'24</a>),
(<a target="_blank" href="works/llm/InfiAgent_DABench-0529.pdf">Hu et al., ICML'24</a>),
(<a target="_blank" href="https://arxiv.org/abs/2310.05845">GraphLLM, arXiv:2310.05845</a>).
</p>
<p>
<b>Codes: </b>
[<a target="_blank" href="https://github.com/zjunet/ETR">ETR</a>],
[<a target="_blank" href="https://github.com/mistyreed63849/Graph-LLM">GraphLLM</a>],
[<a target="_blank" href="https://github.com/zjunet/GraphAdapter">GraphAdapter</a>]
</p>
<p>
<b>Benchmark:</b>
[<a target="_blank" href="https://github.com/InfiAgent/InfiAgent">InfiAgent</a>]
</p>
</div>
<!--
<div class="container">
<div class="page-header">
<h4>Anomaly Detection in Graphs</h4>
</div>
<img width="310" src="img/abnormal.jpg" align="left" style="margin-left:20px;margin-right:20px">
<p>
The goal is to understand and detect abnormal vertexes (e.g., users with anomalous behaviors) in large-scale social and information networks.
Generally, we study the question of <i>can abnormality be detected by graph neural networks?</i> Despite the fact that GNNs have emerged as a powerful tool for modeling graph data, we find that most GNNs empirically fail to identify abnormalities. We further explore the reasons behind this phenomenon from both the spectral and the spatial view of GNNs and propose corresponding improvements (<a target="_blank" href="works/gnn/IJCAI22_Abnormality.pdf">Chai et al., IJCAI'22</a>, <a target="_blank" href="works/gnn/IJCAI22_Beyond.pdf">Sun et al., IJCAI'22</a>).
</p>
<p>
Our work has been widely applied in many scenarios. In the telecommunications field, we propose to spot telemarketing frauds, with an emphasis on unveiling the "precise fraud" phenomenon and the strategies used by fraudsters to precisely select targets (<a target="_blank" href="works/telecom_fraud/TKDE_Fraud_Yang.pdf">Yang et al., TKDE'19</a>).
Our study is conducted on a one-month complete dataset of telecommunication metadata in Shanghai with 54 million anonymous users and 698 million call logs.
</p>
<p>
In the financial field, we unearth the correlation between users' anomalous behaviors and their communication network structure in an online lending platform.
Moreover, we propose a novel problem: how to identify multi-type fraudsters (<a target="_blank" href="works/loan_fraud/cikm19_loan.pdf">Yang et al., CIKM'19</a>)?
Our proposed framework can uniformly identify two types of frauds:
default borrowers, who will default on a loan to the platform,
and cheating agents, who recruit and teach borrowers to cheat by providing false information and faking application materials.
</p>
<p>
<b>Papers: </b>
(<a target="_blank" href="works/gnn/IJCAI22_Abnormality.pdf">Chai et al., IJCAI'22</a>),
(<a target="_blank" href="works/gnn/IJCAI22_Beyond.pdf">Sun et al., IJCAI'22</a>),
(<a target="_blank" href="works/telecom_fraud/TKDE_Fraud_Yang.pdf">Yang et al., TKDE'19</a>),
(<a target="_blank" href="works/loan_fraud/cikm19_loan.pdf">Yang et al., CIKM'19</a>)
</p>
<p>
<b>Codes: </b>
[<a target="_blank" href="https://github.com/zjunet/AMNet">AMNet</a>],
[<a target="_blank" href="https://github.com/zjunet/PathNet">PathNet</a>]
</p>
<p>
<b>Benchmark:</b>
[<a href="https://dgraph.xinye.com/introduction">DGraph</a>]
</p>
</div>
<div class="container">
<div class="page-header">
<h4>Time2Graph: Revisiting Time Series Modeling from the Perspective of Graphs</h4>
</div>
<img width="310" src="works/t2g/unigraph-1.jpg" align="left" style="margin-left:20px;margin-right:20px">
<p>
Time series modeling has attracted extensive research efforts; however, achieving both reliable efficiency and interpretability from a unified model still remains a challenging problem.
</p>
<p>
Our recent work proposes to model time series from the perspective of graphs. More specifically, we aim to capture the intrinsic factors and their transitions behind the time series, and describe how these factors affect the time series evolution. To achieve this, we respectively propose
a shapelet-based method (<i>Time2Graph</i>, <a target="_blank" href="works/t2g/time2graph_aaai20.pdf">Cheng et al., AAAI'20</a>; <i>Time2Graph+</i>, <a target="_blank" href="works/t2g/time2graphplus_tkde21.pdf">Cheng et al., TKDE'21</a>) and a dynamic graph neural network based model (<i>EvoNet</i>, <a target="_blank" href="works/t2g/evonet_wsdm21.pdf">Hu et al., WSDM'21</a>). Our proposed methods not only achieve clear improvements compared with state-of-the-art baselines in many tasks, but also provide valuable insights towards explaining the prediction results.
</p>
<p>
Our work has been applied in real-world scenarios, such as network traffic anomaly monitoring, <a target="_blank" href="https://help.aliyun.com/document_detail/172132.html?spm=a2c4g.11186623.6.936.5f8e4633MZwHqQ">offered as a common service of Alicloud</a>, and electricity-theft behavior detection (<a target="_blank" href="works/hebr/HEBR_WWW20.pdf">Hu et al., WWW'20</a>), in collaboration with Alibaba and State Grid Corporation of China.
</p>
<p>
<b>Papers: </b>
(<a target="_blank" href="works/t2g/time2graphplus_tkde21.pdf">Cheng et al., TKDE'21</a>),
(<a target="_blank" href="works/t2g/time2graph_aaai20.pdf">Cheng et al., AAAI'20</a>),
(<a target="_blank" href="works/t2g/evonet_wsdm21.pdf">Hu et al., WSDM'21</a>),
(<a target="_blank" href="works/hebr/HEBR_WWW20.pdf">Hu et al., WWW'20</a>)
</p>
<p>
<b>Codes: </b>
<a target="_blank" href="https://petecheng.github.io/Time2Graph">[Time2Graph]</a>,
<a target="_blank" href="https://github.com/petecheng/Time2GraphPlus">[Time2Graph+]</a>,
<a target="_blank" href="https://github.com/zjunet/EvoNet">[EvoNet]</a>
</p>
</div>
<div class="container">
<div class="page-header">
<h4>Learning Robust Graph Models</h4>
</div>
<img width="310" src="img/enhance.gif" align="left" style="margin-left:20px;margin-right:20px">
<p>
Network data in real-world tends to be error-prone due to incomplete sampling or imperfect measurements. This in turn results in inaccurate results when performing network analysis or modeling, such as node classification and link prediction, on these flawed networks.
</p>
<p>
Our research aims to reconstruct a reliable network from a flawed one, a process referred to as <i>network enhancement</i>.
More specifically, network enhancement aims to detect the noisy links that are observed in the network but should not exist in the real world, as well as to complement the missing links that do indeed exist in the real world yet remain unobserved.
</p>
<p>
From one perspective, we turn the network enhancement problem into edge sequence generation and employ a deep reinforcement learning framework to solve it, which takes advantage of the
downstream task to guide the network denoising process (NetRL, <a target="_blank" href="works/robust/tkde2021_netrl.pdf">Xu et al., TKDE'21</a>).
From another perspective, we construct a self-supervised learning framework that identifies missing and noisy links simultaneously by leveraging their mutual influence (E-Net, <a target="_blank" href="works/robust/tkde2020_robust_xu.pdf">Xu et al., TKDE'20</a>).
</p>
<p>
Moreover, we study the model robustness against adversarial attacks. Our work shows that even without any information about the target model, one can still perform effective attacks (<a target="_blank" href="works/robust/aaai22_blindfolded_attack.pdf">Xu et al., AAAI'22a</a>).
To handle such perturbations, we further propose an unsupervised defense technique to robustify pre-trained deep graph models (<a target="_blank" href="works/robust/aaai22_robust.pdf">Xu et al., AAAI'22b</a>).
</p>
<p>
<b>Papers: </b>
(<a target="_blank" href="works/robust/tkde2020_robust_xu.pdf">Xu et al., TKDE'20</a>),
(<a target="_blank" href="works/robust/tkde2021_netrl.pdf">Xu et al., TKDE'21</a>),
(<a target="_blank" href="works/robust/aaai22_blindfolded_attack.pdf">Xu et al., AAAI'22a</a>),
(<a target="_blank" href="works/robust/aaai22_robust.pdf">Xu et al., AAAI'22b</a>).
</p>
<p>
<b>Codes & benchmark: </b><a target="_blank" href="https://github.com/galina0217/NetRL">[NetRL]</a>, <a target="_blank" href="https://github.com/galina0217/E-Net">[E-Net]</a>,
[<a target="_blank" href="https://cogdl.ai/grb/home">Graph Robustness Benchmark</a>]
</p>
</div>
-->
<!--
<div class="container">
<div class="page-header">
<h4>Representation Learning for Social Networks</h4>
</div>
<img width="310" src="works/dynamictriad/embedding.jpg" align="left" style="margin-left:20px;margin-right:20px">
<p>
Graph embedding, also known as network representation learning, aims to learn the low-dimensional representations of vertexes in a network, while structure and inherent properties of the graph is preserved.
</p>
<p>
Our research mainly focuses on learning representations for social networks.
Comparing with other networks, social networks have unique properties.
For example, social networks are dynamic and evolving over time, caused by user interactions and unstable user relations. We study how to preserve both structural information and temporal information of a given social network, by modeling triadic closure process (<a target="_blank" href="works/dynamictriad/dynamic_triad.pdf">Zhou et al., AAAI'18</a>).
In particular, the general idea is to impose triad, which is a group of three vertices and is one of the basic units of networks. We model how a closed triad, which consists of three vertices connected with each other, develops from
an open triad that has two of three vertices not connected with each other. This triadic closure process is a fundamental
mechanism in the formation and evolution of networks, thereby makes our model being able to capture the network dynamics and to learn representation vectors for each vertex at different time steps.
</p>
<p>
Besides, social networks are scale-free: vertex degrees of a social network follow a heavy-tailed distribution.
Is it possible to reconstruct a scale-free network according to the learned vertex embedding?
We first theoretically analyze the difficulty of embedding and reconstructing a scale-free network in the Euclidean
space, by converting our problem to the sphere packing problem.
Then, we propose the "degree penalty" principle for designing scale-free property preserving network embedding
algorithm: punishing the proximity between high-degree vertexes.
We introduce two implementations of our principle by utilizing the spectral techniques and a skip-gram model respectively
(<a target="_blank" href="works/scalefree/scale_free_network_embedding.pdf">Feng et al., AAAI'18</a>).
</p>
<p>
<b>Related papers: </b>
(<a target="_blank" href="works/dynamictriad/dynamic_triad.pdf">Zhou et al., AAAI'18</a>),
(<a target="_blank" href="works/scalefree/scale_free_network_embedding.pdf">Feng et al., AAAI'18</a>),
(<a target="_blank" href="works/ge/www_2018_rare.pdf">Gu et al., WWW'18</a>)
</p>
<p>
<b>Related codes and data: </b><a target="_blank" href="https://github.com/luckiezhou/DynamicTriad">[DynamicTriad]</a> <a target="_blank" href="https://github.com/rhythmswing/DP-Spectral">[DP-Spectral]</a>
</p>
</div>
<div class="container">
<div class="page-header">
<h4>Urban Dreams of Migrants: Study of Migrant Integration</h4>
</div>
<img width="295" src="works/migrant/graphical.png" align="left" style="margin-left:20px;margin-right:40px">
<p>
An unprecedented human mobility has driven the rapid urbanization around the world. In China, the fraction of population dwelling in cities increased from 17.9% to 52.6% between 1978 and 2012. Such large-scale migration poses both significant challenges for policymakers and important questions for researchers.
</p>
<p>
To understand the process of migrant integration and help more migrants to realize their urban dreams, we have some exciting ongoing work.
We employ a user telecommunication metadata in Shanghai and study systematic differences between locals and migrants in their mobile communication networks and geographical locations (<a target="_blank" href="works/migrant/urban_dream.pdf">Yang et al., AAAI'18</a>). By distinguishing new migrants (who recently moved to a new city) from settled migrants (who have been in a new city for a while), we demonstrate the integration process of new migrants.
The left figure shows geographical distributions of locals, settled migrants and new migrants in Shanghai.
Moreover, we investigate migrants’ behavior in their first weeks and in particular, how their behavior relates to early departure (<a target="_blank" href="works/migrant/migrant_churn.pdf">Yang et al., WWW'18</a>), by further employing a novel housing price dataset.
</p>
<p>
We hope that our study can encourage more researchers in our community to examine the problem of migrant integration from different perspectives and eventually lead to methodologies and applications that benefit policymaking and millions of migrants.
</p>
<p>
<b>Related papers: </b>
(<a target="_blank" href="works/migrant/migrant_churn.pdf">Yang et al., WWW'18</a>),
(<a target="_blank" href="works/migrant/urban_dream.pdf">Yang et al., AAAI'18</a>)
</p>
<p>
<b>Media coverage: </b>
(<a target="_blank" href="https://www.newscientist.com/article/2134693-phone-metadata-reveals-where-city-migrants-go-and-who-they-call/">NewScientist</a>),
(<a target="_blank" href="http://www.zju.edu.cn/2018/0312/c638a789729/page.htm">浙江大学科学封面</a>)
</p>
<p>
<b>Related housing price data: </b><a target="_blank" href="data.html">[HousingPrice]</a>
</p>
</div>
-->
<br>
<a name="publication"></a>
<div class="container">
<div class="page-header">
<h2>Full Publication List</h2>
<h4>(in reverse chronological order)</h4>
</div>
<!--
<li style="margin:10px"><b> Yang Yang</b>, Jie Tang, Yuxiao Dong, Qiaozhu Mei, Reid A. Johnson, and Nitesh V. Chawla.
Modeling the Interplay Between Individual Behavior and Network Distributions.
(Preprint).
[<a href="http://arxiv.org/pdf/1511.02562v1.pdf">PDF</a>]
</li>
-->
<font size="3">
<b>2024</b>
</font>
<li style="margin:10px">
Shihao Tu, Yupeng Zhang, <a target="_blank" href="https://xiaojingzi.github.io/">Jing Zhang</a>, Zhendong Fu, Yin Zhang, and <b>Yang Yang</b>.
PowerPM: Foundation Model for Power Systems.
In <i>Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems </i> (<a target="_blank" href="https://neurips.cc/Conferences/2024">NeurIPS'24</a>), 2024.
[<a target="_blank" href="works/power/NeurIPS24_PowerPM.pdf">PDF</a>]
</li>
<li style="margin:10px">
Shihao Tu, <a target="_blank" href="https://caolinfeng.github.io/homepage/">Linfeng Cao</a>, Daoze Zhang, <a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>, Lvbin Ma, Yin Zhang, and <b>Yang Yang</b>.
DMNet: Self-comparison Driven Model for Subject-independent Seizure Detection.
In <i>Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems </i> (<a target="_blank" href="https://neurips.cc/Conferences/2024">NeurIPS'24</a>), 2024.
[<a target="_blank" href="works/brainnet/NeurIPS24_DMNet.pdf">PDF</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>, Tianyu Cao, Jing Xu, Jiahe Li, Zhilong Chen, Tao Xiao, and <b>Yang Yang</b>.
Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification.
In <i>Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems </i> (<a target="_blank" href="https://neurips.cc/Conferences/2024">NeurIPS'24</a>), 2024.
[<a target="_blank" href="works/brainnet/NeurIPS24_Con4M.pdf">PDF</a>]
</li>
<li style="margin:10px">
Renhong Huang, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Zhiming Yang, Xiang Si, Xin Jiang, Hanyang Yuan, Chunping Wang, and <b>Yang Yang</b>.
Extracting Training Data from Molecular Pre-trained Models.
In <i>Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems </i> (<a target="_blank" href="https://neurips.cc/Conferences/2024">NeurIPS'24</a>), 2024.
[<a target="_blank" href="works/gnn/NeurIPS24_Molextract.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/Molextract/Data-Extraction-from-Molecular-Pre-trained-Model">Code</a>]
</li>
<li style="margin:10px">
Hanyang Yuan, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Renhong Huang, Mingli Song, Chunping Wang, and <b>Yang Yang</b>.
Towards More Efficient Property Inference Attacks on Graph Neural Networks.
In <i>Proceedings of the Thirty-Eighth Annual Conference on Neural Information Processing Systems </i> (<a target="_blank" href="https://neurips.cc/Conferences/2024">NeurIPS'24</a>), 2024.
[<a target="_blank" href="works/gnn/NeurIPS24_Attack.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/GPIA_NIPS">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://daozezhang.github.io">Daoze Zhang</a>, Zhizhang Yuan, <a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>, Kerui Chen, and <b>Yang Yang</b>.
Brant-X: A Unified Physiological Signal Alignment Framework.
In <i>Proceedings of the 30th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining </i> (<a target="_blank" href="https://kdd2024.kdd.org/">KDD'24</a>), 2024.
[<a target="_blank" href="works/brainnet/KDD24_BrantX.pdf">PDF</a>]
</li>
<li style="margin:10px">
Juren Li*, Fanzhe Fu*, Ran Wei, <a target="_blank" href="https://sunefei.github.io/">Yifei Sun</a>, Zeyu Lai, Ning Song, Xin Chen, and <b>Yang Yang</b>.
Chromosomal Structural Abnormality Diagnosis by Homologous Similarity.
In <i>Proceedings of the 30th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining </i> (<a target="_blank" href="https://kdd2024.kdd.org/">KDD'24</a>), 2024 (*: equal contribution).
[<a target="_blank" href="works/application/KDD24_Chromosome.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/JuRenGithub/HomNet">Code</a>]
</li>
<li style="margin:10px">
Renhong Huang, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Xin Jiang, Ruichuan An, and <b>Yang Yang</b>.
Can Modifying Data Address Graph Domain Adaptation?
In <i>Proceedings of the 30th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining </i> (<a target="_blank" href="https://kdd2024.kdd.org/">KDD'24</a>), 2024.
[<a target="_blank" href="works/gnn/KDD24_Adaptation.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/GraphAlign">Code</a>]
</li>
<li style="margin:10px">
Hanyang Yuan, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Cong Wang, Ziqi Yang, Chunping Wang, Keting Yin, and <b>Yang Yang</b>.
Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data.
In <i>Proceedings of the 30th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining </i> (<a target="_blank" href="https://kdd2024.kdd.org/">KDD'24</a>), 2024.
[<a target="_blank" href="works/gnn/KDD24_Privacy.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/GPS_KDD">Code</a>]
</li>
<li style="margin:10px">
Ziwei Chai, Guoyin Wang, Jing Su, Tianjie Zhang, Xuanwen Huang, Xuwu Wang, Jingjing Xu, Jianbo Yuan, Hongxia Yang, Fei Wu, and <b>Yang Yang</b>.
An Expert is Worth One Token: Synergizing Multiple Expert LLMs as Generalist via Expert Token Routing.
In <i>Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics </i> (<a target="_blank" href="https://2024.aclweb.org/">ACL'24</a>), 2024.
[<a target="_blank" href="works/llm/An_expert_is_worth_one_token_ACL24.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/ETR">Code</a>]
</li>
<li style="margin:10px">
Taoran Fang, Wei Zhou, <a target="_blank" href="https://sunefei.github.io/">Yifei Sun</a>, Kaiqiao Han, Lvbin Ma, and <b>Yang Yang</b>.
Exploring Correlations of Self-Supervised Tasks for Graphs.
In <i>Proceedings of the 41st International Conference on Machine Learning </i> (<a target="_blank" href="https://icml.cc/Conferences/2024">ICML'24</a>), 2024.
[<a target="_blank" href="works/gnn/ICML2024_Correlation_SSL.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/GraphTCM">Code</a>]
</li>
<li style="margin:10px">
Xueyu Hu, Ziyu Zhao, Shuang Wei, Ziwei Chai, Qianli Ma, Guoyin Wang, Xuwu Wang, Jing Su, Jingjing Xu, Ming Zhu, Yao Cheng, Jianbo Yuan,
Jiwei Li, Kun Kuang, <b>Yang Yang</b>, Hongxia Yang, and Fei Wu.
InfiAgent-DABench: Evaluating Agents on Data Analysis Tasks.
In <i>Proceedings of the 41st International Conference on Machine Learning </i> (<a target="_blank" href="https://icml.cc/Conferences/2024">ICML'24</a>), 2024.
[<a target="_blank" href="works/llm/InfiAgent_DABench-0529.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/InfiAgent/InfiAgent">InfiAgent</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://haorandeng.github.io/">Haoran Deng</a>, <b>Yang Yang</b>, Jiahe Li, Cheng Chen, Weihao Jiang, and Shiliang Pu.
Fast Updating Truncated SVD for Representation Learning with Sparse Matrices.
In <i>Proceedings of the 12th International Conference on Learning Representations </i> (<a target="_blank" href="https://iclr.cc/Conferences/2024">ICLR'24</a>), 2024.
[<a target="_blank" href="works/damf/ICLR24_FastSVD.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/IncSVD">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://caolinfeng.github.io/homepage/">Linfeng Cao</a>, <a target="_blank" href="https://haorandeng.github.io/">Haoran Deng</a>, <b>Yang Yang</b>, Chunping Wang, and Lei Chen.
Graph-Skeleton: ∼1% Nodes are Sufficient to Represent Billion-Scale Graph.
In <i>Proceedings of the 33rd Web Conference</i> (<a target="_blank" href="https://www2024.thewebconf.org/">WWW'24</a>), 2024.
[<a target="_blank" href="works/gnn/WWW24_GraphSkeleton.pdf">PDF</a>]
[<a target="_blank" href="works/gnn/WWW24_GraphSkeleton_Long.pdf">Long Version</a>]
[<a target="_blank" href="https://github.com/zjunet/GraphSkeleton">Code</a>]
</li>
<li style="margin:10px">
Xuanwen Huang, Kaiqiao Han, <b>Yang Yang</b>, Dezheng Bao, Quanjin Tao, Ziwei Chai, and Qi Zhu.
Can GNN be Good Adapter for LLMs?
In <i>Proceedings of the 33rd Web Conference</i> (<a target="_blank" href="https://www2024.thewebconf.org/">WWW'24</a>), 2024.
[<a target="_blank" href="works/gnn/WWW24_GraphAdapter.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/GraphAdapter">Code</a>]
</li>
<li style="margin:10px">
Juren Li, <b>Yang Yang</b>, Youmin Chen, Jianfeng Zhang, Zeyu Lai, and Lujia Pan.
DWLR: Domain Adaptation under Label Shift for Wearable Sensor.
In <i>Proceedings of the 33rd International Joint Conference on Artificial Intelligence</i> (<a target="_blank" href="https://ijcai24.org">IJCAI'24</a>), 2024.
[<a target="_blank" href="works/domain/IJCAI24_DWLR.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/JuRenGithub/DWLR">Code</a>]
</li>
<li style="margin:10px">
Youmin Chen*, Xinyu Yan*, <b>Yang Yang</b>, Jianfeng Zhang, <a target="_blank" href="https://xiaojingzi.github.io/">Jing Zhang</a>, Lujia Pan, and Juren Li.
Disentangling Domain and General Representations for Time Series Classification.
In <i>Proceedings of the 33rd International Joint Conference on Artificial Intelligence</i> (<a target="_blank" href="https://ijcai24.org">IJCAI'24</a>), 2024 (*: equal contribution).
[<a target="_blank" href="works/domain/IJCAI24_disentangling.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/IJCAI-CADT/cadt">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://sunefei.github.io/">Yifei Sun</a>, Qi Zhu, <b>Yang Yang</b>, Chunping Wang, Tianyu Fan, Jiajun Zhu, and Lei Chen.
Fine-tuning Graph Neural Networks by Preserving Graph Generative Patterns.
In <i>Proceedings of the 38th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-24/">AAAI'24</a>), 2024.
[<a target="_blank" href="works/gnn/AAAI24_Tuning.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/G-Tuning">Code</a>]
</li>
<li style="margin:10px">
Renhong Huang, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Xin Jiang, Chenglu Pan, Zhiming Yang, Chunping Wang, and <b>Yang Yang</b>.
Measuring Task Similarity and Its Implication in Fine-Tuning Graph Neural Networks.
In <i>Proceedings of the 38th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-24/">AAAI'24</a>), 2024.
[<a target="_blank" href="works/gnn/AAAI24_Measuring.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/Bridge-Tune">Code</a>]
</li>
<li style="margin:10px">
Chenglu Pan, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Yue Yu, Ziqi Yang, Qingbiao Wu, Chunping Wang, Lei Chen, and <b>Yang Yang</b>.
Towards Fair Graph Federated Learning via Incentive Mechanisms.
In <i>Proceedings of the 38th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-24/">AAAI'24</a>), 2024.
[<a target="_blank" href="works/gnn/AAAI24_Federated.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/FairGraphFL">Code</a>]
</li>
<font size="3">
<b>2023</b>
</font>
<li style="margin:10px">
Taoran Fang, Zhiqing Xiao, Chunping Wang, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Xuan Yang, and <b>Yang Yang</b>.
DropMessage: Unifying Random Dropping for Graph Neural Networks.
In <i>Proceedings of the 37th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-23/">AAAI'23</a>), 2023.
[<a target="_blank" href="works/gnn/AAAI23_DropMessage.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/DropMessage">Code</a>]
<b><font color="red"> (Distinguished Paper Award)</font></b>
</li>
<li style="margin:10px">
Taoran Fang, Yunchao Zhang, <b>Yang Yang</b>, Chunping Wang, and Lei Chen.
Universal Prompt Tuning for Graph Neural Networks.
In <i>Proceedings of the Thirty-Seventh Annual Conference on Neural Information Processing Systems</i> (<a href="https://nips.cc/Conferences/2023">NeurIPS'23</a>), 2023.
[<a target="_blank" href="works/gnn/NeurIPS23_Prompt.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/GPF">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://daozezhang.github.io">Daoze Zhang</a>*, Zhizhang Yuan*, <b>Yang Yang</b>, <a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>, Jingjing Wang, and Yafeng Li.
Brant: Foundation Model for Intracranial Neural Signal.
In <i>Proceedings of the Thirty-Seventh Annual Conference on Neural Information Processing Systems</i> (<a href="https://nips.cc/Conferences/2023">NeurIPS'23</a>), 2023 (*: equal contribution).
[<a target="_blank" href="works/brainnet/NeurIPS23_Brant.pdf">PDF</a>]
[<a target="_blank" href="https://zju-brainnet.github.io/Brant.github.io/">Website</a>]
</li>
<li style="margin:10px">
Zhizhang Yuan*, <a target="_blank" href="https://daozezhang.github.io">Daoze Zhang</a>*, <b>Yang Yang</b>, <a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>, and Yafeng Li.
PPi: Pretraining Brain Signal Model for Patient-independent Seizure Detection.
In <i>Proceedings of the Thirty-Seventh Annual Conference on Neural Information Processing Systems</i> (<a href="https://nips.cc/Conferences/2023">NeurIPS'23</a>), 2023 (*: equal contribution).
[<a target="_blank" href="works/brainnet/NeurIPS23_PPi.pdf">PDF</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Renhong Huang, Xin Jiang, Yuxuan Cao, Carl Yang, Chunping Wang, and <b>Yang Yang</b>.
Better with Less: A Data-Centric Perspective on Pre-Training Graph Neural Networks.
In <i>Proceedings of the Thirty-Seventh Annual Conference on Neural Information Processing Systems</i> (<a href="https://nips.cc/Conferences/2023">NeurIPS'23</a>), 2023.
[<a target="_blank" href="works/gnn/NeurIPS23_Less.pdf">PDF</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://haorandeng.github.io/">Haoran Deng</a>, <b>Yang Yang</b>, Jiahe Li, Haoyang Cai, Shiliang Pu, and Weihao Jiang.
Accelerating Dynamic Network Embedding with Billions of Parameter Updates to Milliseconds.
In <i>Proceedings of the Twenty-Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</i> (<a href="http://kdd.org/kdd2023/">KDD'23</a>), 2023.
[<a target="_blank" href="works/damf/KDD_23_Accelerating_Dynamic_Network_Embedding_with_Billions_of_Parameters_Update_to_Milleseconds.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/DAMF">Code</a>]
</li>
<li style="margin:10px">
Donghong Cai*, <a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>*, <b>Yang Yang</b>, Teng Liu, and Yafeng Li.
MBrain: A Multi-channel Self-Supervised Learning Framework for Brain Signals.
In <i>Proceedings of the Twenty-Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</i> (<a href="http://kdd.org/kdd2023/">KDD'23</a>), 2023 (*: equal contribution).
[<a target="_blank" href="works/brainnet/KDD23_MBrain.pdf">PDF</a>]
</li>
<li style="margin:10px">
Yuxuan Cao*, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>*, Carl Yang, Jiaan Wang, Yunchao Mercer Zhang, Chunping Wang, Lei Chen, and <b>Yang Yang</b>.
When to Pre-Train Graph Neural Networks? An Answer from Data Generation Perspective!
In <i>Proceedings of the Twenty-Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</i> (<a href="http://kdd.org/kdd2023/">KDD'23</a>), 2023 (*: equal contribution).
[<a target="_blank" href="works/graph_pretrain/KDD23_When_to_Pre-Train.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/W2PGNN">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://xuanyang19.github.io/">Xuan Yang</a>, <b>Yang Yang</b>, <a target="_blank" href="https://www.chenhaot.com/">Chenhao Tan</a>, Yinghe Lin, Zhengzhe Fu, Fei Wu, and Yueting Zhuang.
Unfolding and Modeling the Recovery Process after COVID Lockdowns.
In <i><a target="_blank" href="https://www.nature.com/srep/">Nature Scientific Reports</a></i>, 2023.
[<a target="_blank" href="works/covid/COVID_Recovery_2023.pdf">PDF</a>]
[<a target="_blank" href="https://www.nature.com/articles/s41598-023-30100-5">Online</a>]
[<a target="_blank" href="works/covid/COVID_poster.pdf">Poster</a>]
</li>
<li style="margin:10px">
Ziwei Chai, <b>Yang Yang</b>, Jiawang Dan, Sheng Tian, Changhua Meng, Weiqiang Wang, and <a target="_blank" href="https://sunefei.github.io/">Yifei Sun</a>.
Towards Learning to Discover Money Laundering Sub-network in Massive Transaction Network.
In <i>Proceedings of the 37th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-23/">AAAI'23</a>), 2023.
[<a target="_blank" href="works/gnn/AAAI23_Laundering.pdf">PDF</a>]
</li>
<li style="margin:10px">
Boning Zhang and <b>Yang Yang</b>.
MediaHG: Rethinking Eye-catchy Features in Social Media Headline Generation.
In <i>Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing </i> (<a target="_blank" href="https://2023.emnlp.org/">EMNLP'23</a>), 2023.
[<a target="_blank" href="works/social/EMNLP23_MediaHG.pdf">PDF</a>]
</li>
<li style="margin:10px">
Yukuo Cen, Zhenyu Hou, Yan Wang, Qibin Chen, Yizhen Luo, Zhongming Yu, Hengrui Zhang, Xingcheng Yao, Aohan Zeng, Shiguang Guo, <a target="_blank" href="https://ericdongyx.github.io/">Yuxiao Dong</a>, <b>Yang Yang</b>, Peng Zhang, Guohao Dai, Yu Wang, Chang Zhou, Hongxia Yang, and <a target="_blank" href="http://keg.cs.tsinghua.edu.cn/jietang/">Jie Tang</a>.
CogDL: A Comprehensive Library for Graph Deep Learning.
In <i>Proceedings of the Web Conference 2023</i> (<a target="_blank" href="https://www2023.thewebconf.org/">WWW'23</a>), 2023.
[<a target="_blank" href="works/gnn/WWW23_CogDL.pdf">PDF</a>]
</li>
<font size="3">
<b>2022</b>
</font>
<li style="margin:10px">
<a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>*, <b>Yang Yang*</b>, Tao Yu, Yingying Fan, Xiaolong Mo, and Carl Yang.
BrainNet: Epileptic Wave Detection from SEEG with Hierarchical Graph Diffusion Learning.
In <i>Proceedings of the Twenty-Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</i> (<a href="http://kdd.org/kdd2022/">KDD'22</a>), 2022 (*: equal contribution).
[<a target="_blank" href="works/brainnet/KDD22_BrainNet.pdf">PDF</a>]
</li>
<li style="margin:10px">
Xuanwen Huang, <b>Yang Yang</b>, Yang Wang, Chunping Wang, Zhisheng Zhang, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Lei Chen and Michalis Vazirgiannis.
DGraph: A Large-Scale Financial Dataset for Graph Anomaly Detection.
In <i>Proceedings of the 36th Conference on Neural Information Processing Systems</i> (<a target="_blank" href="https://nips.cc/Conferences/2022/">NeurIPS'22</a>), 2022.
[<a target="_blank" href="works/dgraph/dgraph_2022.pdf">PDF</a>]
[<a target="_blank" href="https://dgraph.xinye.com/introduction">DGraph Data</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://xuanyang19.github.io/">Xuan Yang</a>, <b>Yang Yang</b>, Jintao Su, <a target="_blank" href="https://sunefei.github.io/">Yifei Sun</a>, Shen Fan, Zhongyao Wang, Jun Zhan, and Jingmin Chen.
Who's Next: Rising Star Prediction via Diffusion of User Interest in Social Networks.
In <i> IEEE Transactions on Knowledge and Data Engineering</i> (TKDE), 2022 (accepted). [<a target="_blank" href="works/taocode/TKDE22_rising_star.pdf">PDF</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, <b>Yang Yang</b>, <a target="_blank" href="https://mrnobodycali.github.io/">Junru Chen</a>, Xin Jiang, Chunping Wang, Jiangang Lu, and <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>.
Unsupervised Adversarially Robust Representation Learning on Graphs.
In <i>Proceedings of the 36th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-22/">AAAI'22</a>), 2022.
[<a target="_blank" href="works/robust/aaai22_robust.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/galina0217/robustgraph">Code</a>]
</li>
<li style="margin:10px">
Ziwei Chai*, Siqi You*, <b>Yang Yang</b>, Shiliang Pu, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Haoyang Cai, and Weihao Jiang.
Can Abnormality be Detected by Graph Neural Networks?
In <i>Proceedings of the 31st International Joint Conference on Artificial Intelligence</i>
(<a target="_blank" href="https://www.ijcai-22.org/">IJCAI'22</a>), 2022 (*: equal contribution).
[<a target="_blank" href="works/gnn/IJCAI22_Abnormality.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/AMNet">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://sunefei.github.io/">Yifei Sun</a>, <a target="_blank" href="https://haorandeng.github.io/">Haoran Deng</a>, <b>Yang Yang</b>, Chunping Wang, <a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, Renhong Huang, <a target="_blank" href="https://caolinfeng.github.io/homepage/">Linfeng Cao</a>, Yang Wang, and Lei Chen.
Beyond Homophily: Structure-aware Path Aggregation Graph Neural Network.
In <i>Proceedings of the 31st International Joint Conference on Artificial Intelligence</i>
(<a target="_blank" href="https://www.ijcai-22.org/">IJCAI'22</a>), 2022.
[<a target="_blank" href="works/gnn/IJCAI22_Beyond.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/PathNet">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>, Xin Jiang, Yanhao Wang, Chunping Wang, Jiangang Lu, and <b>Yang Yang</b>.
Blindfolded Attackers Still Threatening: Strict Black-Box Adversarial Attacks on Graphs.
In <i>Proceedings of the 36th AAAI Conference on Artificial Intelligence </i> (<a target="_blank" href="https://aaai.org/Conferences/AAAI-22/">AAAI'22</a>), 2022.
[<a target="_blank" href="works/robust/aaai22_blindfolded_attack.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/galina0217/stack">Code</a>]
</li>
<li style="margin:10px">
Lei Chen, Guanying Li, <a target="_blank" href="http://www.sdspeople.fudan.edu.cn/zywei/">Zhongyu Wei</a>, <b>Yang Yang</b>, Baohua Zhou, Qi Zhang, Xuanjing Huang.
A Progressive Framework for Role-Aware Rumor Resolution.
In <i>Proceedings of the 29th International Conference on Computational Linguistics</i> (<a target="_blank" href="https://coling2022.org">COLING'22</a>), 2022, pages 2748–2758.
[<a target="_blank" href="works/others/coling22-rumor.pdf">PDF</a>]
</li>
<font size="3">
<b>2021</b>
</font>
<li style="margin:10px">
<a target="_blank" href="https://galina0217.github.io">Jiarong Xu</a>, <b>Yang Yang</b>, Shiliang Pu, Yao Fu, Jun Feng, Weihao Jiang, Jiangang Lu, and Chunping Wang.
NetRL: Task-aware Network Denoising via Deep Reinforcement Learning.
In <i> IEEE Transactions on Knowledge and Data Engineering</i> (TKDE), 2021.
[<a target="_blank" href="works/robust/tkde2021_netrl.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/galina0217/NetRL">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://petecheng.github.io/">Ziqiang Cheng</a>, <b>Yang Yang</b>, Shuo Jiang, Wenjie Hu, Zhangchi Ying, Ziwei Chai, and Chunping Wang.
Time2Graph+: Bridging Time Series and Graph Representation Learning via Multiple Attentions.
In <i> IEEE Transactions on Knowledge and Data Engineering</i> (TKDE), 2021.
[<a target="_blank" href="works/t2g/time2graphplus_tkde21.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/petecheng/Time2GraphPlus">Code</a>]
</li>
<li style="margin:10px">
Xuanwen Huang, <b>Yang Yang</b>, <a target="_blank" href="https://petecheng.github.io/">Ziqiang Cheng</a>, Shen Fan, Zhongyao Wang, Juren Li, Jun Zhang, and Jingmin Chen.
How Powerful are Interest Diffusion on Purchasing Prediction: A Case Study of Taocode.
In <i>Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval </i>
(<a target="_blank" href="http://www.wsdm-conference.org/2021/">SIGIR'21</a>), 2021.
[<a target="_blank" href="works/taocode/SIGIR21_Taocode_Diffusion.pdf">PDF</a>]
</li>
<li style="margin:10px">
Ping Shao, <b>Yang Yang</b>, Shengyao Xu, and Chunping Wang.
Network Embedding via Motifs.
In <i>ACM Transactions on Knowledge Discovery from Data </i>(<a target="_blank" href="http://tkdd.acm.org/">TKDD</a>), 2021.
[<a target="_blank" href="works/motif/TKDD21_Motif.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/larry2020626/LEMON">Code</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://vachelhu.github.io/">Wenjie Hu</a>, <b>Yang Yang</b>, <a target="_blank" href="https://petecheng.github.io/">Ziqiang Cheng</a>, <a target="_blank" href="http://jiyang3.web.engr.illinois.edu/">Carl Yang</a>, and <a target="_blank" href="http://ink-ron.usc.edu/xiangren/">Xiang Ren</a>.
Time-Series Event Prediction with Evolutionary State Graph.
In <i>Proceedings of the 14th ACM International Conference on Web Search and Data Mining</i>
(<a target="_blank" href="http://www.wsdm-conference.org/2021/">WSDM'21</a>), 2021.
[<a target="_blank" href="works/t2g/evonet_wsdm21.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/zjunet/EvoNet">Code</a>]
[<a target="_blank" href="https://help.aliyun.com/document_detail/172132.html?spm=a2c4g.11186623.6.1107.578371bdiiqDwk">Demo</a>]
</li>
<li style="margin:10px">
Qinkai Zheng, Xu Zou, <a target="_blank" href="https://ericdongyx.github.io/">Yuxiao Dong</a>, Yukuo Cen, Da Yin, <a target="_blank" href="https://galina0217.github.io/">Jiarong Xu</a>, <b>Yang Yang</b>, and <a target="_blank" href="http://keg.cs.tsinghua.edu.cn/jietang/">Jie Tang</a>.
Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning.
In <i>Proceedings of the 35th Conference on Neural Information Processing Systems</i> (<a target="_blank" href="https://nips.cc/Conferences/2021/">NeurIPS'21</a>), 2021.
[<a target="_blank" href="https://openreview.net/pdf?id=NxWUnvwFV4">PDF</a>]
[<a target="_blank" href="https://cogdl.ai/grb/home">Graph Robustness Benchmark</a>]
</li>
<!--
<li style="margin:10px">
Jiarong Xu, Junru Chen, Siqi You, Zhiqing Xiao, <b>Yang Yang</b> and Jiangang Lu.
Robustness of deep learning models on graphs: A survey.
In <i>AI Open</i>, Volume 2, 2021, pages 69-78.
[<a target="_blank" href="works/others/robust_survey.pdf">PDF</a>]
</li>
-->
<font size="3">
<b>2020</b>
</font>
<li style="margin:10px"><a target="_blank" href="https://petecheng.github.io/">Ziqiang Cheng</a>,<b> Yang Yang</b>, Wei Wang, Wenjie Hu, Yueting Zhuang, and <a target="_blank" href="https://www.gjsong-pku.cn/">Guojie Song</a>.
Time2Graph: Revisiting Time Series Modeling with Dynamic Shapelets.
In <i> Proceedings of the 34th AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="https://aaai.org/Conferences/AAAI-20/">AAAI'20</a>), 2020.
[<a target="_blank" href="works/t2g/time2graph_aaai20.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/petecheng/Time2Graph">Code</a>]
</li>
<li style="margin:10px"><a target="_blank" href="https://vachelhu.github.io/">Wenjie Hu</a>,<b> Yang Yang</b>, Jianbo Wang, Xuanwen Huang, and Ziqiang Cheng.
Understanding Electricity-Theft Behavior via Multi-Source Data.
In <i> Proceedings of the 29th World Wide Web Conference</i>
(<a target="_blank" href="https://www2020.thewebconf.org/">WWW'20</a>), 2020.
[<a target="_blank" href="works/hebr/HEBR_WWW20.pdf">PDF</a>]
[<a target="_blank" href="works/hebr/theft_www_talk_v1.1.pptx">Slides</a>]
<!--[<a target="_blank" href="https://github.com/zjunet/HEBR">Code</a>]-->
</li>
<li style="margin:10px"><a target="_blank" href="https://galina0217.github.io/">Jiarong Xu</a>,<b> Yang Yang</b>, Chunping Wang, Zongtao Liu, <a target="_blank" href="https://xiaojingzi.github.io/">Jing Zhang</a>, Lei Chen, and Jiangang Lu.
Robust Network Enhancement from Flawed Networks.
In <i> IEEE Transactions on Knowledge and Data Engineering</i> (TKDE), 2020.
[<a target="_blank" href="works/robust/tkde2020_robust_xu.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/galina0217/E-Net">Code</a>]
</li>
<li style="margin:10px">
Xiaobin Tang, <a target="_blank" href="https://xiaojingzi.github.io/">Jing Zhang</a>, Bo Chen, <b> Yang Yang</b>, Hong Chen, and Cuiping Li.
BERT-INT: A BERT-based Interaction Model For Knowledge Graph Alignment.
In <i> Proceedings of the 29th International Joint Conference on Artificial Intelligence</i>
(<a target="_blank" href="https://www.ijcai20.org/">IJCAI'20</a>), 2020.
[<a target="_blank" href="works/bert-int/InteractionKA.pdf">PDF</a>]
</li>
<li style="margin:10px">
<a target="_blank" href="https://vachelhu.github.io/">Wenjie Hu</a>, <b>Yang Yang</b>, Liang Wu, Zongtao Liu, Zhanlin Sun, and Bingshen Yao.
Capturing Evolution Genes for Time Series Data.
<i>Preprint</i>.
[<a target="_blank" href="https://arxiv.org/abs/1905.05004">arxiv</a>]
</li>
<li style="margin:10px">
Rui Feng, <b>Yang Yang</b>, Yuehan Lyu, <a target="_blank" href="https://www.chenhaot.com/">Chenhao Tan</a>, <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>, and Chunping Wang.
Learning Fair Representations via an Adversarial Framework.
<i>Preprint</i>.
[<a target="_blank" href="https://export.arxiv.org/pdf/1904.13341">arxiv</a>]
</li>
<font size="3">
<b>2019</b>
</font>
<li style="margin:10px"><b> Yang Yang</b>, Yuhong Xu, <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>, <a target="_blank" href="https://ericdongyx.github.io/">Yuxiao Dong</a>, Fei Wu, and Yueting Zhuang.
Mining Fraudsters and Fraudulent Strategies in Large-Scale Mobile Social Networks.
In <i> IEEE Transactions on Knowledge and Data Engineering</i> (TKDE), 2019.
[<a target="_blank" href="works/telecom_fraud/TKDE_Fraud_Yang.pdf">PDF</a>]
</li>
<li style="margin:10px">Zongtao Liu,<b> Yang Yang</b>, Wei Huang, Zhongyi Tang, Ning Li, and Fei Wu.
How Do Your Neighbors Disclose Your Information: Social-Aware Time Series Imputation.
In <i> Proceedings of the Twenty-Eighth World Wide Web Conference</i>
(<a target="_blank" href="https://www2019.thewebconf.org/">WWW'19</a>), 2019.
[<a target="_blank" href="works/imputation/imputation.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/tomstream/STI">Code</a>]
[<a target="_blank" href="bibtex/27.html">BIB</a>]
</li>
<li style="margin:10px">Ziqiang Cheng,<b> Yang Yang</b>, <a target="_blank" href="https://www.chenhaot.com/">Chenhao Tan</a>, Denny Cheng, Alex Cheng, and Yueting Zhuang.
What Makes a Good Team? A Large-scale Study on the Effect of Team Composition in Honor of Kings.
In <i> Proceedings of the Twenty-Eighth World Wide Web Conference</i>
(<a target="_blank" href="https://www2019.thewebconf.org/">WWW'19</a>, short paper), 2019.
[<a target="_blank" href="works/arena/arenamain.pdf">PDF</a>]
[<a target="_blank" href="bibtex/26.html">BIB</a>]
[<a target="_blank" href="https://arxiv.org/abs/1902.06432">long version on arxiv</a>]
</li>
<li style="margin:10px"><b> Yang Yang*</b>, Yuhong Xu*, Chunping Wang, <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>, Fei Wu, Yueting Zhuang, and Ming Gu.
Understanding Default Behavior in Online Lending.
In <i> Proceedings of the Twenty-Eighth Conference on Information and Knowledge Management</i>
(<a target="_blank" href="http://www.cikm2019.net/">CIKM'19</a>), 2019 (*: equal contribution).
[<a target="_blank" href="works/loan_fraud/cikm19_loan.pdf">PDF</a>]
[<a target="_blank" href="bibtex/28.html">BIB</a>]
</li>
<li style="margin:10px">Rui Feng,<b> Yang Yang</b>, <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>, and Chunping Wang.
A Unified Network Embedding Algorithm for Multi-type Similarity Measures.
In <i> 1st International Workshop on Graph Representation Learning and its Applications</i>
(<a target="_blank" href="https://cikm-grla.github.io/">GRLA'19</a>), 2019.
[<a target="_blank" href="works/unified/unified_embedding_feng19.pdf">PDF</a>]
[<a target="_blank" href="bibtex/29.html">BIB</a>]
<!-- [<a target="_blank" href="https://arxiv.org/pdf/1904.13341.pdf">long version on arxiv</a>] -->
</li>
<font size="3">
<b>2018</b>
</font>
<li style="margin:10px"><b> Yang Yang</b>, Zongtao Liu, <a href="https://www.chenhaot.com/">Chenhao Tan</a>, Fei Wu, Yueting Zhuang, and Yafeng Li.
To Stay or to Leave: Churn Prediction for Urban Migrants in the Initial Period.
In <i> Proceedings of the Twenty-Seventh World Wide Web Conference</i>
(<a href="https://www2018.thewebconf.org/">WWW'18</a>), 2018, pages 967-976.
[<a href="works/migrant/migrant_churn.pdf">PDF</a>]
[<a href="works/migrant/migrant_www18.pptx">Slides</a>]
[<a href="data.html">Data</a>]
[<a target="_blank" href="bibtex/25.html">BIB</a>]
</li>
<li style="margin:10px"><b> Yang Yang</b>, <a target="_blank" href="https://www.chenhaot.com/">Chenhao Tan</a>, Zongtao Liu, Fei Wu, and Yueting Zhuang.
Urban Dreams of Migrants: A Case Study of Migrant Integration in Shanghai.
In <i> Proceedings of the 32nd AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="http://www.aaai.org/Conferences/AAAI/aaai18.php">AAAI'18</a>), 2018, pages 507-514.
[<a target="_blank" href="works/migrant/urban_dream.pdf">PDF</a>]
[<a target="_blank" href="bibtex/24.html">BIB</a>]
</li>
<li style="margin:10px">Lekui Zhou, <b> Yang Yang</b>, <a target="_blank" href="http://ink-ron.usc.edu/xiangren/">Xiang Ren</a>, Fei Wu, and Yueting Zhuang.
Dynamic Network Embedding by Modeling Triadic Closure Process.
In <i> Proceedings of the 32nd AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="http://www.aaai.org/Conferences/AAAI/aaai18.php">AAAI'18</a>), 2018, pages 571-578.
[<a target="_blank" href="works/dynamictriad/dynamic_triad.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/luckiezhou/DynamicTriad">Code</a>]
[<a target="_blank" href="bibtex/23.html">BIB</a>]
</li>
<li style="margin:10px">Rui Feng*, <b> Yang Yang*</b>, <a target="_blank" href="https://vachelhu.github.io/">Wenjie Hu</a>, Fei Wu, and Yueting Zhuang.
Representation Learning for Scale-free Networks.
In <i> Proceedings of the 32nd AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="http://www.aaai.org/Conferences/AAAI/aaai18.php">AAAI'18</a>), 2018, pages 282-289 (*: equal contribution).
[<a target="_blank" href="works/scalefree/scale_free_network_embedding.pdf">PDF</a>]
[<a target="_blank" href="https://github.com/rhythmswing/DP-Spectral">Code</a>]
[<a target="_blank" href="bibtex/22.html">BIB</a>]
</li>
<li style="margin:10px">Yupeng Gu, <a target="_blank" href="http://web.cs.ucla.edu/~yzsun/">Yizhou Sun</a>, Yanen Li, and <b>Yang Yang</b>.
RaRE: Social Rank Regulated Large-scale Network Embedding.
In <i> Proceedings of the Twenty-Seventh World Wide Web Conference</i>
(<a target="_blank" href="https://www2018.thewebconf.org/">WWW'18</a>), 2018, pages 359-368.
[<a target="_blank" href="works/ge/www_2018_rare.pdf">PDF</a>]
[<a target="_blank" href="bibtex/21.html">BIB</a>]
</li>
<li style="margin:10px">Menghan Wang, Xiaolin Zheng,<b> Yang Yang</b>, and Kun Zhang.
Collaborative Filtering with Social Exposure: A Modular Approach to Social Recommendation.
In <i> Proceedings of the 32nd AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="http://www.aaai.org/Conferences/AAAI/aaai18.php">AAAI'18</a>), 2018, pages 2516-2523.
[<a target="_blank" href="works/others/cf_social_exposure.pdf">PDF</a>]
[<a target="_blank" href="bibtex/20.html">BIB</a>]
</li>
<li style="margin:10px">Jun Feng, Minlie Huang, Li Zhao, <b> Yang Yang</b>, and Xiaoyan Zhu.
Reinforcement Learning for Relation Extraction from Noisy Data.
In <i> Proceedings of the 32nd AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="http://www.aaai.org/Conferences/AAAI/aaai18.php">AAAI'18</a>), 2018, pages 5779-5786.
[<a target="_blank" href="works/rl_denoisy/reinforcement_learning_for_relation_classification_from_noisy_data.pdf">PDF</a>]
[<a target="_blank" href="bibtex/19.html">BIB</a>]
</li>
<font size="3">
<b>2017</b>
</font>
<li style="margin:10px"><b>Yang Yang</b>, Jie Tang, and Juanzi Li.
Learning to Infer Competitive Relationships in Heterogeneous Networks.
In <i>ACM Transactions on Knowledge Discovery from Data </i>(<a target="_blank" href="http://tkdd.acm.org/">TKDD</a>), 2017.
[<a target="_blank" href="works/competitive/TKDD-Yang-et-al-Competitor.pdf">PDF</a>]
[<a target="_blank" href="bibtex/18.html">BIB</a>]
</li>
<li style="margin:10px"> Yuxiao Dong, Nitesh V. Chawla, Jie Tang, <b>Yang Yang</b>, and Yang Yang.
User Modeling on Demographic Attributes in Large-Scale Mobile Social Networks.
In <i> ACM Transactions on Information Systems </i> (<a target="_blank" href="http://tois.acm.org/">TOIS</a>), 2017, Volume 35, Issue 4.
[<a target="_blank" href="works/whoami/TOIS17-Dong-et-al-User-Modeling-on-Demographic-Attributes.pdf">PDF</a>]
</li>
<li style="margin:10px">
Xinyang Jiang, Siliang Tang, <b>Yang Yang</b>, Zhou Zhao, Fei Wu, and Yueting Zhuang.
Detecting Temporal Proposal for Action Localization with Tree-structured Search Policy.
In <i>Proceedings of the 25th Conference on ACM Multimedia </i> (<a target="_blank" href="http://www.acmmm.org/2017/">ACM Multimedia'17</a>), 2017, pages 1069-1077.
</li>
<li style="margin:10px"><b>Yang Yang</b> and Jie Tang.
Computational Models for Social Influence and Diffusion.
In <i> Proceedings of the 26th International Joint Conference on Artificial Intelligence </i>(IJCAI'17), 2017. <a target="_blank" href="ijcai2017tutorial.html">(Tutorial)</a>
</li>
<font size="3">
<b>2016</b>
</font>
<li style="margin:10px"><b> Yang Yang</b>, Jia Jia, Boya Wu, and Jie Tang.
Social Role-Aware Emotion Contagion in Image Social Networks.
In <i> Proceedings of the 30th AAAI Conference on Artificial Intelligence </i>
(<a target="_blank" href="http://www.aaai.org/Conferences/AAAI/aaai16.php">AAAI'16</a>), 2016, pages 65-71.
[<a target="_blank" href="works/roleemotion/role-emotion-aaai16.pdf">PDF</a>]
[<a target="_blank" href="http://hcsi.cs.tsinghua.edu.cn/static/web/opendata/flickr.html">Data</a>]
</li>
<li style="margin:10px">Jun Feng, Minlie Huang, <b> Yang Yang</b>, and Xiaoyan Zhu.