<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<title>dcEmb</title>
<link rel="stylesheet" type="text/css" href="assets/css/embecosm.css" />
</head>
<body>
<header class="header-bar">
<div class="h-info">
<strong>The Open Source Software Tool Chain Experts</strong>
</div>
<div class="h-contact">
<a href="mailto:[email protected]">[email protected]</a>
</div>
</header>
<nav class="navbar navbar-default">
<div class="navbar-wrapper">
<div class="navbar-header">
<img class="navbar-img" src=assets/images/dcEmb-logo.png
alt=dcEmb-logo>
</div> <!--navbar-header-->
<div class="navbar-menu">
<ul>
<li><a href="https://www.embecosm.com">Embecosm</a></li>
<li class="active"><a href="/">Home</a></li>
</ul>
</div>
</div>
</nav> <!--navbar-->
<header class="banner-default">
<div class="row">
<div class="text-wrap">
<h1 class="entry-title">Embecosm
<div class="entry-subtitle">
Open Source, High Performance Dynamic Causal Modeling
</div>
</h1>
</div>
<img class="banner-img" src=assets/images/ai-logo.png
alt=ai-logo>
</div> <!--row-->
</header>
<div class="content-container">
<div class="content">
<div class="section-title">
<h2 id="dynamic-causal-modeling">Dynamic Causal Modeling</h2>
<p>Embecosm provides dcEmb, the first commercially robust, high-performance
implementation of Dynamic Causal Modeling: a revolutionary Machine Learning
technique that offers high levels of interpretability and explainability, most
notably for time series data.</p>
</div>
</div>
<div class="block-section-container">
<div class="split-3-container">
<div class="split-content">
<p><img src="assets/images/icon1.jpg" alt="icon1" /></p>
<h3 id="explain-solutions"><a href="#explainability">Explain Solutions</a></h3>
<p>dcEmb evaluates solutions in an intrinsicly explainable way that accounts for
how every part of an outcome is related to every other part.
</div>
</div>
<div class="split-3-container">
<div class="split-content">
<img src="assets/images/icon2.jpg" alt="icon2" /></p>
<h3 id="explore-outcomes"><a href="#exploration">Explore Outcomes</a></h3>
<p>dcEmb calculates not just a single, best, solution but the relative likelihood
of all solutions.
</div>
</div>
<div class="split-3-container">
<div class="split-content">
<img src="assets/images/icon3.jpg" alt="icon3" /></p>
<h3 id="encode-knowledge"><a href="#evidence">Encode Knowledge</a></h3>
<p>dcEmb provides a systematic way of pre-encoding knowledge into models,
preventing it from having to learn complicated or nuanced datasets fully
from scratch.</p>
</div>
</div>
</div>
<div class="content">
<h2 id="explain-solutions-1">Explain Solutions</h2>
<p>dcEmb is highly focused on explainability, especially in the challenging domain
of time series analysis.</p>
<h3 id="what-is-explainability">What is Explainability?</h3>
<p>Explainability in AI refers to the ability to understand and interpret the
decisions and predictions made by artificial intelligence systems. It involves
providing insights into how AI models work and the factors that influence their
output. Explainability is crucial for building trust in AI systems and ensuring
they make fair and unbiased decisions. It also allows developers and users to
identify and correct errors or biases in the models.</p>
<h3 id="case-study-amazon">Case Study: Amazon</h3>
<p>In 2018, Amazon was developing an artificial intelligence system to help
automate the hiring process for job openings in their company. The goal was to
create a system that could efficiently filter through the thousands of job
applications they received and identify the most promising candidates for
further review by recruiters.</p>
<p>The AI was designed to scan resumes and evaluate applicants based on
qualifications, skills, and experience. It was trained on a dataset of resumes
submitted to Amazon over a 10-year period, which contained information on
candidates’ educational backgrounds, work history, and other relevant factors.</p>
<p>However, the project ran into trouble when the team realized that the system was
biased against female candidates. The AI was found to favor male applicants and
downgrade resumes that contained keywords associated with women, such as
“women’s” or “female.” It was also less likely to recommend female candidates
for technical roles, which are typically male-dominated.</p>
<p>The issue stemmed from the fact that the training dataset was heavily skewed
towards male applicants, reflecting the gender imbalance in the tech industry.
As a result, the AI learned to associate male applicants with the traits and
qualifications that were most commonly found in the dataset, while downgrading
resumes containing keywords associated with women.</p>
<p>The flawed system was eventually scrapped, and Amazon’s HR department continued
to use human recruiters to screen job applications. The incident highlighted the
risks of using AI for hiring decisions and the importance of ensuring fairness
and transparency in AI systems. It also prompted calls for more diversity and
inclusivity in the tech industry, both in terms of hiring practices and the
development of AI systems.</p>
<h3 id="dcemb-and-explainability">dcEmb and Explainability</h3>
<p>dcEmb approaches explainability through explicit generative models: the
quantities the model estimates are named, meaningful parameters, and the way
every part of an outcome relates to every other part can be read directly from
the fitted model rather than reverse-engineered after the fact.</p>
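<p>As a greatly simplified picture of what such a generative model looks like,
the sketch below simulates a hypothetical two-state linear model whose coupling
parameters are exactly the kind of quantities a dynamic causal model estimates
and explains. It is a conceptual illustration only; the names and structure are
not the dcEmb API.</p>
<pre><code>// Conceptual sketch only: a hypothetical two-state linear dynamic causal
// model, illustrating the kind of generative model dcEmb fits to time
// series. Names and structure here are illustrative, not the dcEmb API.
#include &lt;cstdio&gt;

int main() {
  // Hidden states x evolve under coupling parameters A:
  //   dx/dt = A * x,   observed data y = x + noise.
  // Inverting this model recovers A, whose entries are directly
  // interpretable as causal influences between the two states.
  double x[2] = {1.0, 0.0};                 // hidden states
  const double A[2][2] = {{-0.5, 0.2},      // coupling parameters
                          { 0.3, -0.4}};
  const double dt = 0.1;                    // integration step

  for (int t = 0; t &lt; 50; ++t) {
    double dx0 = A[0][0] * x[0] + A[0][1] * x[1];
    double dx1 = A[1][0] * x[0] + A[1][1] * x[1];
    x[0] += dt * dx0;
    x[1] += dt * dx1;
    std::printf("%d %.4f %.4f\n", t, x[0], x[1]);   // predicted time series
  }
  return 0;
}</code></pre>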
<h2 id="explore-solutions">Explore Solutions</h2>
<p>dcEmb calculates not just an individual “best” solution, but the relative
likelihood of all solutions.</p>
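<p>Concretely, Bayesian model comparison of this kind assigns each candidate
solution a log evidence, and the relative likelihoods follow by normalising
over the candidates. The sketch below is a generic illustration with invented
log-evidence values; it is not dcEmb code.</p>
<pre><code>// Sketch: turning per-model log evidences into relative posterior
// probabilities (a softmax over log evidences, assuming equal prior
// probability for each model). The log-evidence values are made up
// purely for illustration.
#include &lt;cmath&gt;
#include &lt;cstdio&gt;

int main() {
  const int n_models = 3;
  const double log_evidence[n_models] = {-102.3, -100.1, -104.8};

  // Subtract the maximum log evidence for numerical stability.
  double max_le = log_evidence[0];
  for (int i = 1; i &lt; n_models; ++i)
    if (log_evidence[i] &gt; max_le) max_le = log_evidence[i];

  double weight[n_models];
  double total = 0.0;
  for (int i = 0; i &lt; n_models; ++i) {
    weight[i] = std::exp(log_evidence[i] - max_le);
    total += weight[i];
  }

  // Each candidate's relative likelihood, normalised to sum to one.
  for (int i = 0; i &lt; n_models; ++i)
    std::printf("model %d: p = %.3f\n", i, weight[i] / total);
  return 0;
}</code></pre>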
<h2 id="what-is-uncertainty-in-ai">What is uncertainty in AI?</h2>
<p>Uncertainty quantification is a crucial concept in artificial intelligence (AI)
that measures the level of confidence or uncertainty associated with AI models
and their predictions. This helps identify and address sources of bias or error
that can arise from incomplete or noisy data and ensure accurate and reliable AI
applications. Additionally, understanding the level of uncertainty allows for
transparency and trust in AI systems, providing users with more information
about how the models work and the factors that influence their output.</p>
<p>Furthermore, uncertainty quantification is essential for safety-critical
applications, such as autonomous vehicles or medical diagnosis systems, where
incorrect predictions can have serious consequences. By providing insights into
the sources of uncertainty and potential errors, uncertainty quantification
helps to ensure that appropriate safeguards are in place to address potential
inaccuracies. In summary, uncertainty quantification plays a vital role in the
accuracy, transparency, and safety of AI applications, making it an essential
concept for the development of trustworthy and reliable AI systems.</p>
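<p>In its simplest form, quantified uncertainty means reporting a prediction
together with a credible interval taken from its posterior distribution, rather
than as a single number. The sketch below assumes a Gaussian posterior with
made-up values; it is illustrative only, not dcEmb output.</p>
<pre><code>// Sketch: reporting a prediction together with its uncertainty rather
// than as a bare point estimate. Assumes a Gaussian posterior; the
// numbers are invented for illustration.
#include &lt;cstdio&gt;

int main() {
  const double posterior_mean = 3.2;   // hypothetical parameter estimate
  const double posterior_sd   = 0.6;   // hypothetical posterior std. dev.

  // For a Gaussian, roughly 90% of the probability mass lies within
  // +/- 1.645 standard deviations of the mean.
  const double z90 = 1.645;
  std::printf("estimate: %.2f (90%% credible interval %.2f to %.2f)\n",
              posterior_mean,
              posterior_mean - z90 * posterior_sd,
              posterior_mean + z90 * posterior_sd);
  return 0;
}</code></pre>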
<h2 id="encode-knowledge-1">Encode Knowledge</h2>
<p>Being able to encode prior knowledge into AI systems is essential for developing
intelligent systems that can learn efficiently and make accurate predictions.
Prior knowledge, such as expert knowledge or domain-specific information,
provides valuable insights that can help guide learning and decision-making in
AI systems. By incorporating prior knowledge, AI systems can make more accurate
predictions and reduce the amount of data required for training.</p>
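<p>A standard way to pre-encode knowledge is as a prior distribution over a
model parameter: the prior and the data are combined into a posterior, so a
sensible prior reduces how much data is needed for a stable estimate. The
sketch below uses a textbook Gaussian prior-and-likelihood update with made-up
numbers; it is a conceptual illustration, not dcEmb's interface.</p>
<pre><code>// Sketch: encoding prior knowledge as a Gaussian prior over a single
// parameter and combining it with a few noisy observations (a textbook
// conjugate Gaussian update). Values are illustrative only.
#include &lt;cstdio&gt;

int main() {
  // Prior belief about the parameter, e.g. from domain expertise.
  const double prior_mean = 2.0, prior_var = 0.5;

  // A handful of noisy observations with known noise variance.
  const double data[3] = {2.6, 2.4, 2.9};
  const int n = 3;
  const double noise_var = 1.0;

  // Posterior precision is the sum of prior and data precisions; the
  // posterior mean is a precision-weighted average, so the prior keeps
  // the estimate sensible when data are scarce.
  const double post_precision = 1.0 / prior_var + n / noise_var;
  double weighted_sum = prior_mean / prior_var;
  for (int i = 0; i &lt; n; ++i) weighted_sum += data[i] / noise_var;

  const double post_mean = weighted_sum / post_precision;
  const double post_var = 1.0 / post_precision;
  std::printf("posterior mean %.3f, variance %.3f\n", post_mean, post_var);
  return 0;
}</code></pre>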
<p>Furthermore, encoding prior knowledge into AI systems can help to improve their
transparency and interpretability. By explicitly incorporating prior knowledge,
it is possible to explain the reasoning behind an AI system’s predictions and
provide insights into the factors that influence its decision-making. This can
help to build trust and confidence in AI systems, especially in applications
where the stakes are high, such as healthcare or finance.</p>
</div>
</div>
</body>
</html>