
Agenda for the April meeting of WebAssembly's Community Group

  • Host: Fastly, San Francisco, CA

  • Dates: Tuesday-Thursday April 10-12, 2018

  • Times:

    • Tuesday - 10:00am to 5:00pm (breakfast 9-10am)
    • Wednesday - 10:00am to 5:00pm (breakfast 9-10am)
    • Thursday - 10:00am to 5:00pm (breakfast 9-10am)
  • Location: 475 Brannan Street, San Francisco, CA 94107

  • Wifi: Signs will be posted in the room.

  • Dinner:

    • Date: Wednesday, April 11, 2018
    • Time: TBD
    • Location: Tres, 130 Townsend St (Tentative)
  • Contact:

Registration

Registration form

Logistics

  • Where to Go: Fastly, 475 Brannan Street, San Francisco, CA 94107 (37.778549, -122.395660)
  • Where to Park:
    • The parking garage is located in the basement of the building and is open 7:00 am - 8:30 pm. Parking is $2.50 per 20 minutes or $17.50 for the day.
    • Other parking garages are nearby.
  • How to access the building:
    • Check in with reception on the first floor. After signing in, take the stairs or elevators to the third floor, where the Fastly reception desk (Ste 300) will hand out visitor badges.
  • Public transit information:
    • BART: If coming from the East Bay, take any train toward San Francisco. If coming from the Peninsula (Daly City, Colma, Millbrae, SSF, the airport), any train will take you to SF. Get off at either the Powell or Montgomery stop. From there you can take the 30 or 45 bus, or it is about a 15-20 minute walk down 3rd Street (from Montgomery) or 4th Street (from Powell). Note: you can also take Muni from stations located inside the BART stations; see the Muni directions.
    • Muni: Take the N Judah towards Ball Park or T Third Street towards Sunnydale/Bayshore. Get off at 4th Street/King for both trains. Head west toward 4th St. Turn left onto 4th St. Turn right on Brannan St.
    • Caltrain: from the South Bay, get off at San Francisco (4th and King Street). Head northwest on 4th St toward Townsend St. Turn right onto Brannan St.

Hotels

List of nearby hotels:

  • Hotel Via (Closest hotel to the office. About a seven to ten minute walk.)
  • Hotel Union Square
  • Axiom Hotel
  • Hotel Zetta
  • Villa Florence
  • Hotel Zelos
  • Marriott Marquis
  • Marriott Courtyard

Agenda items

  • Tuesday - April 10th
    1. Opening, welcome and roll call
      1. Opening of the meeting
      2. Introduction of attendees
      3. Host facilities, local logistics
    2. Find volunteers for note taking
    3. Adoption of the agenda
    4. Proposals and discussions
      1. Multi values (Andreas Rossberg)
        • Status update
      2. Reference types (Andreas Rossberg)
        • Presentation of proposal
        • Open design questions
        • Move to stage 2 or 3?
      3. Experience report: Porting Mono to WebAssembly (Rodrigo Kumpera)
        • Rodrigo will discuss the challenges involved in bringing Mono + .Net to WebAssembly with Blazor.
        • He'll also discuss feature constraints that might inform how the Managed Objects proposal should adapt to support .NET well.
      4. Create sub-groups for the CG.
        • Who should lead each of them? What's that person's role? The champion seems like an obvious fit.
        • How do we scope their charter, what's in scope for them?
        • How do they drive consensus?
        • How do they communicate work to the wider group, and how does the wider group adopt their recommendations?
        • Which projects should get sub-groups for now? What's our criteria?
        • POLL: The CG should have sub-groups.
      5. Nontrapping float-to-int conversions (Dan Gohman)
      6. Memory model for threads (Andreas Rossberg & Conrad Watt)
        • Update on challenges and progress so far
      7. Discussion on Status of Threads Proposal
        • Status update from implementors.
        • POLL: move to stage 4?
      8. Annotations (Andreas Rossberg)
        • Presentation of proposal
        • Move to stage 1?
      9. Default value of WebAssembly.Global
        • Note: schedule this after 1PM so that Dan Ehrenberg can call in. Note-note: discussed early because we ran out of topics and this seemed simple.
        • Currently new WebAssembly.Global({type: 'f32'}) has the value NaN
        • POLL: Should we change this to +0.0?
    5. Adjourn
  • Wednesday - April 11th
    1. Opening, welcome and roll call
      1. Opening of the meeting
      2. Introduction of attendees
      3. Host facilities, local logistics
    2. Find volunteers for note taking
    3. Adoption of the agenda
    4. Proposals and discussions
      1. Tail calls (Andreas Rossberg)
        • Discussion of open issues, esp around typing
      2. JS type API (Andreas Rossberg)
        • Presentation of proposal
        • Move to stage 1?
      3. WebAssembly in Blockchains (Sina Habibian)
        • Blockchains have distinct requirements as a non-web platform.
        • We will use this session to give an overview of approaches and challenges to the wider community.
        • We will present compiled notes from projects within the space (Ethereum, Parity, Dfinity, and Truebit).
      4. Mozilla / Igalia want to start landing JS / Web API tests in wpt
        • Note: schedule this after 1PM so that Dan Ehrenberg can call in.
        • The goal is to entirely cover JS / Web API specs; writing tests to specs.
        • Ok to land in mozilla-central (and uplift via regular two-way sync)?
        • Anyone else want to review before landing?
      5. ES module integration (Lin Clark)
        • Note: schedule this after 1PM so that Dan Ehrenberg can call in.
        • Discussion of open issues, especially around cycles
      6. Addition of signed and zero extended SIMD integer loads (Rich Winterton)
        • Presentation of issue and proposal
      7. Garbage collection (Andreas Rossberg)
        • Discuss road map, revisit in light of separated reference types proposal
        • Discuss scope for MVP
        • Discuss JS integration and mode of collaboration with TC39
    5. Adjourn
  • Thursday - April 12th (WebAssembly Working Group Meeting)
    1. Opening, welcome and roll call
      1. Opening of the meeting
      2. Introduction of attendees
      3. Host facilities, local logistics
    2. Find volunteers for note taking
    3. Adoption of the agenda
    4. Testing discussion continued from yesterday
    5. WebAssembly Crash Reporting API (Brad Nelson)
    6. Update on tooling + Math + bikeshed (Brad Nelson)
      • Status update on bikeshed + current spec.
      • W3C checker "fun"
      • Proposed bikeshed math syntax.
      • POLL: We should transition from sphinx to pure latex.
      • POLL: We should transition from sphinx text to bikeshed.
    7. Detailed Review of the WebAssembly First Public Working Draft:
      • JS interface
        • Note: schedule this after 1PM so that Dan Ehrenberg can call in.
        • Discussion of open issues + review (Mark Miller)
      • Web API
        • Note: schedule this after 1PM so that Dan Ehrenberg can call in.
        • Discussion of open issues + review (Mark Miller)
        • Some related issues
      • Core
        • Discussion of open issues on Spec Sections:
          • 1 - Introduction + Consistency (Limin Zhu)
          • 2 - Structure (Derek Schuff)
          • 3 - Validation (Luke Wagner)
          • 4 - Execution (Ben Smith + Stephen Herhut)
          • 5 - Binary Format (Wouter van Oortmerssen + Brad Nelson)
          • 6 - Text Format (Sam Clegg)
          • Appendix (Eric Holk)
    8. General Discussion on Status of Document
      • POLL: We should move the default link to point to the Editor's Draft.
      • POLL: I am confident v1 of the spec should go to REC.
    9. Closure

Schedule constraints

Daniel Ehrenberg can call in only 1-5 PM on Tuesday and Wednesday, but is available any time except 9-11 AM and noon-1 PM on Thursday. He must be present for the WG JS interface and Web API discussions, and would like to attend the testing, ESM, and WebAssembly.Global discussions.

Dates and locations of future meetings

| Dates | Location | Host |
| --- | --- | --- |
| Monday 22 October until 26 October 2018 | Nice, France | W3C, TPAC |

Meeting notes

Roll Call

  • Alex Beregszaszi, Ethereum / ewasm
  • Alex Danilo, Google
  • Andreas Rossberg, Dfinity
  • Arun Purushan, Intel
  • Aseem Garg, Google
  • Ben Smith, Google
  • Brad Nelson, Google
  • Chris Drappier, ???
  • Dan Gohman, Mozilla
  • Deepti Gandluri, Google
  • Derek Schuff, Google
  • Dominic Tarr, Secure Scuttlebutt
  • Eric Holk, Google
  • Everett Hildenbrandt, Ethereum Foundation
  • JF Bastien, Apple
  • Jacob Gravelle, Google
  • Limin Zhu, Microsoft
  • Lin Clark, Mozilla
  • Luke Wagner, Mozilla
  • Mark Samuel Miller, Agoric
  • Martin Becze, Dfinity
  • Michael Holman, Microsoft
  • Norton Wang, Dfinity
  • Pat Hickey, Fastly
  • Peter Jensen, Intel Corporation
  • Richard Winterton, Intel
  • Robbie Bent, Truebit
  • Rodrigo Kumpera, Microsoft
  • Sam Clegg, Google
  • Shu-yu Guo, Bloomberg
  • Sina Habibian, Truebit
  • TatWai Chong, Arm
  • Till Schneidereit, Mozilla
  • Vincent Belliard, Arm
  • William Maddox, Adobe Systems Inc.
  • Wouter van Oortmerssen, Google

Tuesday

Adoption of the agenda

Derek Schuff seconds.

Multi Value

Andreas Rossberg

Slides

Generalizing typing rules for instructions, blocks, and functions (multiple results instead of 1). Works well with stack machine. Little/no change for binary and text format.

Most issues settled. One thing still missing is JS API.

V8 has implementation, not a lot of effort for instruction/block results, and parameters.

Function results were harder.

The JS API and another implementation are what block stage 4.

JF: Did we poll to just make the JS API throw? [yes]

LW: use case for returning a tuple of values is host bindings

AR: we punted because we weren't sure about the details, and can fill them in later

JF: we voted last year, mostly for reflecting multiple results as arrays in JS

AR: We could defer or do it now.

[poll results]

Other options were return object with named parameters.

LW: Host bindings could override that, but default could be array.

BS: doesn’t have to be array, could be array-like, could be some other kind of indexable object.

If you have your own object you could do other stuff.

JF: conservative thing is to throw for now and figure it out later.

LW: could also throw by default, and have a host binding that allows to return an array.

It’s extra work, but is useful in the short term.

[JF explains poll process]

Poll: Should multiple return values of a function be reified as an array in the JS API by default?

[clarify: this specific proposal should not move as-is but become array]

Voting for things means support for a change, as opposed to keeping the status quo

How do we import array-returning JS functions?

Coerce the 0 value?

AR: If it’s declared as an i32-returning function, the value gets coerced.

With this change we’d figure out how to destructure when typed accordingly

| SF | F | N | A | SA |
| --- | --- | --- | --- | --- |
| 2 | 10 | 5 | 2 | 0 |

The two against votes: EH: it makes more sense to take small steps to move this forward, and then work out the details of the JS API as we go.

RK: makes more sense to discuss this along with host bindings.

Michael Holman: implementation in progress: multi values from blocks but not functions.

Toolchain requirement to move the phase forward? DS: We'd like to add this to LLVM soon. It will more likely be for functions than blocks in the first version.

JS API requirements to move the phase forward? The spec requirements are clear (and the spec work is done), but the JS tests are not in yet.
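For illustration, engines did eventually reify multi-value results as JS arrays. A minimal sketch, assuming a runtime with multi-value support (e.g. a recent Node.js); the module bytes are hand-encoded here rather than produced by a toolchain:

```javascript
// (module (func (export "f") (result i32 i32) (i32.const 1) (i32.const 2)))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x06, 0x01, 0x60, 0x00, 0x02, 0x7f, 0x7f, // type section: () -> (i32, i32)
  0x03, 0x02, 0x01, 0x00,                         // function section: one func, type 0
  0x07, 0x05, 0x01, 0x01, 0x66, 0x00, 0x00,       // export section: "f" = func 0
  0x0a, 0x08, 0x01, 0x06, 0x00,                   // code section: one body, no locals
  0x41, 0x01, 0x41, 0x02, 0x0b,                   // i32.const 1, i32.const 2, end
]);
const { f } = new WebAssembly.Instance(new WebAssembly.Module(bytes)).exports;
console.log(f()); // multiple results come back as an array: [ 1, 2 ]
```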

Reference types

Andreas Rossberg

Slides

Came out of host binding and GC proposals

Can only be passed in from the embedder; new versions of types that can be constructed in wasm will come in the GC proposal.

Does not necessarily imply GC in wasm; only needed if the embedder’s references are GCed

Anyref is a value type and a table elem type (this also makes anyfunc a value type, a subtype of anyref).

Moreover elem type becomes the same as a ref type

[value type hierarchy with number vs reference types]

New instructions for creating null value, checking for null, comparing (anyeqref only), get/set in tables.

Not all refs should be comparable (e.g. JS strings only have structural equality and not reference equality). Also functions? (reference equality for functions might inhibit optimizations)

So anyeqref is a subtype that can be compared.

EH: are there use cases for comparing equality?

AR: JS objects, anything with identity. If source languages have object ref equality. Pointer equality

Tables: introduce multiple tables. Table instructions now have a table_index immediate.

MM: call_indirect has immediate of the table’s index? Yes

AR: call_ref is deferred until types are more explicit

Would probably want a separate table for functions and refs; call_indirect requires an anyfunc table.

MM: anyfunc tables are all values created in wasm. Anyref’s are created by host [?]

BN: will you always need cast to use anyref (or keep different tables?)

WVO: the equality concern pulls JS implementation concerns into the design. Could this be simpler (e.g. require refs to have object identity)? Keeping the implementation bits hidden would make things expensive; the hope is to make this a simple wrapper around host pointers.

MM: host could provide a comparison function, used by wasm. It could encode the hosts limitations on which objects can be compared/how

LW: eventually we want just one instruction rather than a call to an import; part of the question is whether we do extra work to have the intermediate state.

MM: One of the correctness criteria, the host language can validly assume that wasm cannot circumvent the constraints of the host language.

AR: I agree it’s not desirable to have wasm modify host language objects in ways that the host language can’t.

RK: This seems too limited to be useful as-is (e.g. mapping JS objects to C# objects), we’ll have to figure out what the correspondence is with a table in JS.

MH: idea of host giving an equality function is good. Different host objects have a different idea of equality.

JF: Most of the discussion is "do we want to do this now or later"; it would be good to test in non-JS contexts.

MM: Once it’s possible to create anyeqref values in wasm, then we’d want to compare them in wasm

[discussion about whether webassembly equality needs to be fast, or should always use equality imported from host]

BT: One property we have is instructions are like machine instructions, corresponding to small/fast things

BN: are we really going to only have one of these comparison instructions?

WM: I'm concerned that we'll introduce new types for equality, similar to ML; Haskell used type classes for this, which made it much cleaner.

AR: We don’t have polymorphism or function overloading, so those don’t come up.

BN: if we're building host bindings on these, we'd have to know at the boundary which types are comparable. Might just want a single table.

BT: practically numbers and strings won't be anyeqref. Only objects and symbols

MH: Symbols?

AR: They have reference identity.

Hard to use ref types in Rust and C++.

TM: How do you distinguish between passing a number and an anyref in your source language?

Wasm-bindgen generates bindings that maintain the bijection and track the objects for you. There's a Mozilla blog post that explains more.
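The bookkeeping behind such a bijection can be sketched roughly as follows (simplified, with hypothetical names; not wasm-bindgen's actual code): JS objects are parked in a table on the JS side, and only their integer index crosses the wasm boundary.

```javascript
// Index table mapping JS objects to i32 handles that wasm can hold.
const heap = [];   // index -> JS object
const free = [];   // recycled indices

function addObject(obj) {
  // Reuse a freed slot if available, otherwise append.
  const idx = free.length > 0 ? free.pop() : heap.length;
  heap[idx] = obj;
  return idx;                  // this i32 is what gets passed into wasm
}

function getObject(idx) {
  return heap[idx];            // wasm hands the i32 back to look up the object
}

function dropObject(idx) {
  heap[idx] = undefined;       // release the reference so it can be GCed
  free.push(idx);
}
```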

[10 minute break, turned into lunch]

[Presentation continues]

Proposal introduces subtyping - non-coercive, meaning values are unchanged when converting

Simple extension to validation algorithm - checks for subtype rather than type equality

Type of ref.null: 2 options:

  1. Use subtyping (i.e. a nullref type is a subset of all others) (at least, all nullable ref types - not all types will be nullable)

  2. Type annotation on the instruction - requires an extra predicate check during validation once there are non-nullable types
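Option 1 amounts to folding ref.null into the ordinary subtype check that the extended validation algorithm already performs. A hypothetical sketch of such a check (illustrative type names; not the spec algorithm):

```javascript
// Each type lists its immediate supertypes; the full graph is a DAG.
// Option 1 makes nullref a subtype of every nullable reference type.
const supertypes = {
  anyfunc:  ['anyref'],
  anyeqref: ['anyref'],
  nullref:  ['anyfunc', 'anyeqref'],
};

// Validation now asks "is `actual` a subtype of `expected`?"
// instead of requiring exact type equality.
function matches(actual, expected) {
  if (actual === expected) return true;
  return (supertypes[actual] || []).some(s => matches(s, expected));
}

console.log(matches('nullref', 'anyfunc'));  // true: ref.null is fine where anyfunc is expected
console.log(matches('anyfunc', 'anyeqref')); // false: functions are not comparable here
```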

JFB: You imagine we’ll have non-nullable types?

AR: Yes, null will be the bottom type of nullable types, but we may also have structural types that are non-nullable. The full graph is a DAG

LW: it’s odd that there is a type that’s only internal, and not reflected in the binary etc

AR: disagree, it seems more natural to just define an extra type and let the subtyping logic determine everything (rather than keeping an extra thing on the side)

MM: wrt non-nullable types, currently every type needs a 0 value. Non-nullable types don’t have one.

AR: That will come up with future extensions, for example function references. Function reference types will not have null, so we need to handle non-nullable types.

You could allow initializers with locals

This may not be enough, because at the start of the function you don't necessarily have the required information. For scoping reasons, this proposal says that this is not included.

BS: is this similar to stack-polymorphic checking where there’s a fake internal type that only exists for checking?

LW: yes sort of

AR: that’s more like a type variable, but there are similarities

JS API: can now pass values back and forth

Only wasm-exported functions and null match anyfunc, because otherwise there's no way to tell a function's type. A future proposal (tomorrow) fills this in.

JS objects, functions, symbols match anyeqref
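Both boundary rules above are observable from JS in engines that eventually shipped this (anyref under the later name externref). A sketch, assuming a runtime with reference-types support such as a recent Node.js:

```javascript
// Only wasm-exported functions (and null) match anyfunc: a plain JS
// function is rejected at the boundary with a TypeError.
const funcs = new WebAssembly.Table({ element: 'anyfunc', initial: 1 });
let rejected = false;
try {
  funcs.set(0, () => {});
} catch (e) {
  rejected = e instanceof TypeError;
}

// anyref shipped as externref: arbitrary JS values pass through unchanged,
// and the same reference comes back out.
const refs = new WebAssembly.Table({ element: 'externref', initial: 1 });
const obj = { hello: 'wasm' };
refs.set(0, obj);

console.log(rejected, refs.get(0) === obj); // true true
```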

MH: Until recently we didn’t have a global symbol registry, so the symbols weren’t interned.

MM: there is no observable difference between the symbols or strings actually being interned vs. the comparator comparing them as if they were; anyeqref leaks this into wasm.

MM: you have null; what about undefined and false?

AR: null has a meaning inside wasm, the others don’t

MM: Why does JS null map to wasm null?

AR: Why not?

MM: undefined actually matches wasm use case better, since it means uninitialized

AR: We already decided that null in tables maps to JS null

JF: can we discuss this further in a GH issue? It's hard to follow live for folks who don't know the JS semantics as well, but I want to follow when I can get more context.

AR: You could also include bool values; I don't care either way. It's harder to decide about anyref: currently we allow the same set as anyeqref, plus strings, but it could be any JS value.

Open question: allow all JS values for anyref (vs just anyeqrefs + strings)

BT: might be good to allow the engine to have different representations in wasm vs JS

LW: How would a JS number work, and NaN canonicalization?

AR: Any value, you don’t do anything to it, just pass it through.

BT: You might want to have compressed pointers in wasm, anyref might be a compressed 32-bit pointer.

JFB: Into what?

BT: Just a ref, just represented as 32-bit. So you would need to box values that don’t fit into that size (JS Number)

AR: would be easier for JS↔wasm binding if you could just pass anything through

LW: There is some short-term utility to having anyref be our JS any type

BT: But we may get locked into it

AR: this proposal is conservative in that it excludes the problematic ones.

Also allowing anything makes it cheaper on the boundaries, no checks needed.

Status: has prose and formal spec, interpreter and tests, with a JS API proposal as outlined here. Ready for stage 2 or 3?

BT: beyond adding anyref tables, we’ll want weak anyrefs. We’d said weakness is a property of the table, but it could also be a property of the type. Interacts with JS weakref proposal

MM: this is a can of worms.

MM: what does it look like when wasm invokes an imported JS function?

AR: same as currently, with imports?

BS: are you talking about call_ref?

AR: do you mean imports, or functions you get first class? The only thing you can do with an anyfunc currently is put it in a table and call_indirect it.

BS: storing anyref in a global?

AR: it’s just a value type, so yes (globals, locals, params, results, operands)

Remaining steps: resolve null typing, resolve JS values allowed, assign opcodes, implementations, JS API tests.

Stage 3 requires the JS API tests, but no implementation in JS yet

JF: it’s not really a core part of the proposal (if there were other embeddings, we’d have different API requirements). Also interesting what this looks like for non-web embeddings. Does nullable vs non-nullable impose restrictions on those?

AR: This also came up for Dfinity; we had something similar to host bindings, also messy. We have an environment where the reference types live in a wasm-facing API. We haven't decided whether we want null or not. We can abstract over the reference types, so it's relatively simple.

Hard to say what this would look like for a C embedding; probably just raw pointers.

BS: there is wasm2c

Would all global anyrefs be initialized as null? AR: globals have explicit initializers; you could use null or an imported global. Non-nullable types are more interesting. This also applies to locals, table elements, etc., but we'll punt on that until the future proposal.
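In the JS API as eventually shipped (where anyref became externref and the Global descriptor key ended up as `value`), a reference-typed global with an explicit initializer looks like this. A sketch, assuming an engine with reference-types support:

```javascript
// A mutable reference-typed global, explicitly initialized to null.
const g = new WebAssembly.Global({ value: 'externref', mutable: true }, null);

console.log(g.value); // null, per the explicit initializer

// The host can later point it at any JS value.
const handle = { imported: true };
g.value = handle;
console.log(g.value === handle); // true
```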

AR: future extensions, we can add call_ref and ref.func for using typed function references. The function type would not be nullable.

JF: for something that's not exported, you can make a value and then export that

AR: you can already do that, but this types it statically. Here you can also have homogeneous tables. But once you have first-class ref types you don't actually need tables at all anymore.

LW: There could be an optimization for asm.js having homogeneous tables.

MM: putting them in tables also retains them after returning them, partitioning by type seems reasonable.

AR: For first proposal, probably better to leave out subtyping on function types. Things like call_ref would need to have more type-checking.

DS: traps are not like JS exceptions, we don’t encourage relying on trapping behavior.

AR: We should make a strong statement here saying we don’t guarantee this.

BT: we could make subtyping opt-in

LW: I don’t want to add extra subtyping checks on a hot instruction

Other future extension: type imports/exports, to distinguish different host types

Poll: push to stage 3?

JF: stage 2 requires spec text, no formal required. Exit requires tests that can be run against an impl (you have the interpreter). Seems you have the requirements for 3. Need more implementations, which is to be done in stage 3.

BT: V8 has started on the implementation.

Poll: Move the proposal to stage 3

MH: still concerned about anyeqref. A lot of our existing comparisons for strict === are not just pointer comparisons.

AR: Things you can’t pointer-equal should be rejected at the boundary of JS/wasm

MH: One of those for us is null. In most cases we've marshalled null, but sometimes we may not have done so, so we'd have to compare the distinct null objects.

JF: you said forbidding certain types made things simpler, but this indicates that it would make things worse in some cases

LW: one option could be not having this until we have a wasm type to reflect it

AR: At the boundary, you could do a conversion.

MH: True, just bringing it up

LW: It becomes more valuable when we have source languages that have reference types that need to have pointer comparison.

BT: If we don’t have anyeqref, how does it get added later?

MM: the criterion for introducing a type/instruction, would be when we have a wasm typed value that needs to be equality-checked

Should we poll on dropping this from the proposal? We can do that during stage 3; we don't need to do it now.

POLL: move reference types to Stage 3.

| SF | F | N | A | SA |
| --- | --- | --- | --- | --- |
| 9 | 14 | 0 | 0 | 0 |

Experience report: Porting Mono to WebAssembly

Slides

Mono is a .NET runtime, FOSS since 2001. It has been ported to mobile phones, multiple OSes, and game consoles.

Highly configurable to different needs. Different implementation strategies used, JIT etc.

Why MONO on WASM?

Run existing code in the browser. Package C# for use by JS.

Targets WASM MVP, maybe asm.js later.

High quality bindings with JS.

Current users: Ooui (similar to xaml apps), Xamarin.forms, Blazor

Implementation approaches:

First an interpreter, using Emscripten toolchain. Toolchain issues early on (fall 2017), gotten better. C library still a little iffy.

Most of the work in class library implementation. ~100k lines

Some issues: single-threaded, no I/O. could use emscripten layer but comes with some overhead. Bigger issue is single-threaded.

Not an issue for our users, when used to writing web apps.

Release runtime is currently ~1.7 MB wasm + 166 KB JS.

Debug runtime is ~4.8 MB wasm + 875 KB JS.

JF: Is that for a hello world app?

RK: no, the interpreter is fixed. Class bodies are a separate payload.

Toolchain very slow for quick one line changes.

Recompiling the runtime is not viable, interpreter helps here.

Debug runtime especially problematic since we can get no feedback from the browser as to what is slow.

AOT compiler:

Started w/ llvm x86 backend, custom libc glue. Didn’t leverage much from emscripten. Hope to use emscripten when it switches to llvm wasm backend.

Now using clang with wasm backend.

Build-times problematic for our developers, clang does better here. LLD very helpful here, were linking bitcode files previously.

10 MB for a hello world app with no C# tree shaking, including 5-8 MB of IL. WebAssembly is pretty compact; native would be 2x larger.

Debugging:

Chrome only, proxy for v8 wire protocol. Works well, but want to support more browsers, would be good to have standard cross-browser way.

JF: there was a proposal for more than source map debugging.

JF: It would be part of the tooling effort...

DS: Right now you have source maps and the devtool protocol, other browsers implement too, but I would like to have the wasm work more like what you’ve implemented. But we need to coordinate.

RK: It could be injecting in the protocol, or writing a browser plugin

DS: Chrome exposes debugger protocol to extensions through API.

RK: I don’t think source maps is the answer, even dwarf is maybe not enough. We do something similar for mono w/ dwarf, but it gives only a very basic debugging experience. You don’t get the ability to inspect properties, async behavior, etc. We need to coordinate w/ existing debugging tools. We don’t want to have a wasm debugger in isolation, it needs to work w/ existing JS debugger. People will be writing JS and C# together, so needs to coordinate.

Garbage Collection:

Mono requires conservative stack scanning, started using Boehm GC. Lot of work to use a fully precise scanner. Schedule next GC at certain memory load. At next frame when code yields to main thread. Works OK w/ current workloads, we assume that there is a bounded amount of work on main thread.

GC takes 350 KB out of the 1.7 MB runtime.

The GC spec has a lot of missing features that will make it hard for us to adopt. We're OK with a performance hit; adopting will have a lot of benefits.

Problem is that object model doesn't match well. For example doesn't represent aggregates well, they may have to be boxed.

No inner pointers is hard; they're used a lot in C#. We can probably work around it.

AR: these things are in the proposal, but only as future extensions.

The host API discussion is missing: how to use it from an interpreter / reflection.

MM: What about GC finalization?

RK: If you’re familiar w/ the java model, .Net is much worse, you can arbitrarily register/unregister, weak references that can/can’t see finalization, much harder.

MM: What if wasm committed to a path that could never support .NET finalization semantics; how would that affect things?

RK: I don’t think it’s particularly problematic, the major reason is to deal with native resources. AFAIK JS doesn’t require a lot of bookkeeping.

MM: what about legacy semantics?

RK: the good thing about finalization is that even users get it wrong, so it is probably fine. As long as we can get something reasonable.

RK: If it is difficult, we may just make it an option.

BN: You would be forced into a place where some users would not be supported

RK: hard to say how many users depend on the semantics. We may ship multiple options.

BN: Not sure how folks will use it, I suppose.

TS: Once you support multi-threaded code, it may become more difficult. If the runtime never supports that, is that an issue?

RK: We already have that problem on iOS; we can't touch Objective-C objects from another thread. We don't want to force fringe use-cases on the spec.

We could define a subset of .Net that has no finalizers.

AR: Are you suggesting that wasm could perhaps not provide finalization at all?

RK: Maybe? It’s hard to... the main thing we use finalization for, handling host bindings. If the GC handles it, it eliminates most of the need. Native object is a JS object, so the JS engine will already do the finalization. Not too concerned if the finalization doesn’t work properly.

JF: let’s move along from GC

Exceptions

Currently using setjmp/longjmp, it works, performance is meh (setjmp is slow but we don’t need to use it every frame the way you would in C++)

What we’d really like is trapping loads for object null checks

JF: Just base+offset, mprotect regions?

RK: two things are different in C#: null checks come from safe code, and unsafe code is fine if it does whatever. Not a big concern right now. The spec actually aligns well with a managed language.

Interop

Currently requires explicit resource management on both ends, code is annoying.

Problem is actual resource management.

Rather wait for GC spec than create own interop monster. Need host support for GC integration.

TS: Why do you need the explicit GC proposal?

RK: On Android we had objects that existed on both sides; this requires weak references, telling the GC to start, and blocking and waiting for it to finish (or notify).

MM: Do you mark through the foreign object ref?

RK: Too burdensome

MM: Do you collect cross-world cycles?

RK: yes we do. I can explain how we do that on Android. Do not want to do that on browsers.

Threading

C# libs are not ready for it (lack of threading); we're fixing them on a case-by-case basis.

Users have been told to use async, so don't directly rely on threading.

May be a future requirement when the users are not just UI frameworks (e.g. games, etc)

Not super concerned about threading support.

The framework plumbing is not ready for threading, mostly transparent to users.

Browsers

Differences in compilation times between browsers. Long page loading times not acceptable.

300ms vs 18sec.

JF: can you file specific bugs with browser vendors? It's easier for us to debug these loading-time issues internally.

RK: only familiar with mobile so far, so still getting used to testing this for browsers. We did file some bugs.

Unnamed mobile browser decided to give up on wasm. Could debug with open source implementation, but no help in solving the problem.

JF: at least for us there is limits to JIT memory we allocate. 64 megs of code.

RK: still trying to figure out where problems lie. Emscripten generates huge tables of function pointers?

Corporate firewalls block .NET wasm apps -- guessing that they grep for "This program cannot be run in DOS mode", even though it's not a Windows exe.

Really interested in GC proposal. Will help with good bindings.

MH: what would be on your wasm feature wishlist?

RK: null checks, the GC spec; beyond that, just tooling. We have the most friction there. LLD helps. We would also like a better-spec'd ABI; we want to mix C# with C++ code.

JF: have you looked at the tooling repo?

DS: We’re doing it with the object format, have to come back and flesh out the C ABI.

JF: there's a tooling-conventions doc, and they do a weekly call.

RW: w.r.t. Asm.js, you said there are perf differences/ size differences?

RK: asm.js is strictly worse in every way. Maybe used as fallback, but want to avoid if possible.

A 10 MB wasm (asm.js, I think?) file just crashed Node.

LW: how small you think a hello world can get down to when using wasm GC?

RK: if we take other products as a comparison, we think we can get to 1/3 the size, ~3 or 4 MB. We want to get to 2 MB, but that’s maybe too aggressive. There’s a lot of overhead, junk code. Easier to just ship it for now.

Opportunities to drop a lot of code, e.g. make our promise library just a shim on top of JS.

We ship a lot of i18n stuff, maybe can use the JS stuff shipped in browser.

JF: Have you looked at dynamic linking?

RK: We looked but it seemed like it wasn’t ready yet. Needs too many function pointers. Only use case would be during development.

Webassembly CG Subgroups

EH: We went over this the last few video calls, it won't be controversial hopefully. The problem we want to solve: many proposals in flight, many are large or intricate, they benefit from having high-bandwidth communication between interested parties but the bi-weekly CG calls are not the best venue.

EH: Subgroups can self-organize around the bigger proposals, e.g. garbage collection, exception handling. Hopefully the creation process is lightweight: it needs a leader (the proposal champion, in cases when the subgroup is around a proposal). Subgroups can schedule CG calls and meetings, following the same notice requirements, with everyone welcome to attend; they meet as needed, with whoever chooses to attend, and send updates periodically to the CG, possibly at CG meetings, at minimum for phase advancement.

Poll: The CG should have subgroups

AR: Is this poll setting up subgroups or deciding in general?

JF: this is just creating the process. Instead of a proposal champion asking for phase advancement, just ask to create a subgroup. This should be an inclusive process - don't just ask people you know; the whole community group is notified and invited. The process should not be onerous, just inclusive. As wasm evolved and joined W3C, the process evolved from an email thread to this. GC is especially difficult and needs focused attention, so it's a good use case for a subgroup, but it would waste everyone else’s time if they are not interested in GC.

RW: How could those who weren’t in the SG follow along?

JF: I'd like the subgroups to operate like this group: there are meetings with minutes that are published, and non-binding decisions get reported to the CG. This parallels C++ subgroups - subgroups can vote on things there but never do; they trickle their recommendations up to the parent group, and this works well. Trust the experts in that group to get the details right, put the conclusions to the whole group, and then every single discussion / bikeshed doesn't need to be revisited. Bikeshedding can happen in the subgroup because those are the experts that understand the issue best.

RW: I'd like to make sure that when subgroups make a decision that they don't know has effects elsewhere, there’s a way to address that.

JF: Yes, the subgroup can punt things over the fence to the larger group or another subgroup if it isn’t relevant to that group.

JF: If we look at the list of open discussions, there are a few that are obvious candidates for subgroups - GC is one of them, Host Bindings may be part of that or separate, ECMAScript integration is a good subgroup; they might invite people who work on bundlers to that group.

Lin: (agreement)

JF: W3C doesn't have precedent for subgroups; we are a very large CG with a complex ecosystem. They say we can do this as long as it follows the general spirit of the rules.

EH: Assuming we go ahead with subgroups, then a group of people chooses a leader and discusses at the next video call.

Poll: The CG should have sub-groups.

| SF | F | N | A | SA |
|----|---|---|---|----|
| 9 | 16 | 3 | 0 | 0 |

Nontrapping float-to-int conversions

DG: They are in stage 3 right now. The only thing to discuss is whether we should go to stage 4. There are impls in FF and Chrome right now; there may be others.

Toolchain-wise, we have LLVM support.
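For context, the instructions in this proposal saturate rather than trap on out-of-range inputs. A minimal sketch of the intended `i32.trunc_sat_f64_s` semantics, written in JS (the function name is purely illustrative):

```javascript
// Illustrative sketch (not engine code) of i32.trunc_sat_f64_s semantics:
// out-of-range values saturate to the i32 limits and NaN maps to 0, where
// the existing trapping i32.trunc_f64_s would instead raise a runtime error.
function i32TruncSatF64S(x) {
  if (Number.isNaN(x)) return 0;            // NaN -> 0 instead of trapping
  if (x <= -2147483648) return -2147483648; // saturate at INT32_MIN
  if (x >= 2147483647) return 2147483647;   // saturate at INT32_MAX
  return Math.trunc(x);                     // in range: truncate toward zero
}

console.log(i32TruncSatF64S(3.7));   // 3
console.log(i32TruncSatF64S(1e100)); // 2147483647
console.log(i32TruncSatF64S(NaN));   // 0
```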

BN: out of curiosity, any other browsers?

MH: We don’t yet.

JF: Don’t think we ever had contention on this issue. Unless someone thinks this is contentious, it meets all criteria.

AR: Only minor objection -- the naming convention is at odds w/ what we have currently.

JF: We don’t really talk about naming in the phase process -- but we’re ok with changing names. From a VM perspective, the names don’t matter. It would be nice to have stable names, it’s ok to change.

AR: at some point we should have a conversation on name conventions. Once this reaches the final spec the names should be stable

JF: Stage 4 means it's mostly figured out and worthy of shipping. Any questions?

BS: At what point does it move to the spec proper?

DG: When the WG takes a separate vote. I primarily care whether it can ship

BN: We came to the conclusion that we would start a new release train.

AR: We are holding back merging new proposals right now as the logistics of the spec is worked on by WG

JF: That depends mostly on the cadence of WG vs CG.

AR: How far are we into the 150 days? Don't want to reset that clock.

BN: In terms of starting a new train? Each new train starts a new clock. Roughly it’s 5 months and we’re 3 months in.

POLL: We should move non-trapping float-to-int conversions to Stage 4.

| SF | F | N | A | SA |
|----|---|---|---|----|
| 10 | 7 | 7 | 0 | 0 |

Weak Memory Model for Threads

Andreas Rossberg presenting.

Slides

AR: The thread proposal implies a weak memory model. Collaborated on this with Conrad Watt. This brings shared-state concurrency to wasm. Weak memory semantics are an active research topic, you need a semantic model. Delicate interop with JS SharedArrayBuffer. This is Conrad Watt’s PhD Thesis, his advisor is Peter Sewell (Cambridge) one of the experts on this topic. Conrad deserves lots of the credit.

AR: Why is this hard? Still an open research problem. Subtle and difficult to validate. The first line of work in this field was to figure out what CPUs actually implement, by reverse engineering, and then find semantics that explain that - hardware vendors didn’t tell what the semantics were. C++11 is the first to include a memory model, which is fairly recent, and that model is known to (still) have many problems. In Wasm we have novel problems that are in neither C++ nor ECMAScript. Most of the previous work was done in an axiomatic fashion (superimpose constraints on an informal description of the language); for wasm we fortunately have a complete operational semantics, so we have to link these things. There’s some work on giving operational semantics of weak memory, but there exist things it can't model (same for the axiomatic semantics).

Conrad is the real expert on the topic, I hope he will find time to attend the next in-person meeting.

AR: What is not in previous models: growing memory is totally novel; implications include that failed memory accesses need to be ordered to take into account that memory might grow later. Host functions: in the wasm spec we have host functions that you call into, and this behavior is axiomatized (the spec says what it can do to the abstract machine, e.g. manipulate the store in a way that fits some assumptions); with threads this becomes more complicated because there are interleavings between a host function call and the wasm itself, so abstract axioms of the host function's behaviour are not sufficient. Host functions can also spawn threads.

Initially we thought we could take the SAB spec and adapt it to our use case; however, growing memory and host functions are not in there. The ES spec glosses over how threads are actually created (assumes they are just "there"). We have a way to create new threads dynamically. The ES spec doesn't formally describe how memory is created. Need to extend over the SAB model. Also Conrad found some minor things in the ES model that need to be fixed.

Various things that are not in C++ either- growing memory, host functions. Biggest thing is that we have fully defined behavior. C++ eliminates all tricky cases by making them UB. We can allow behavior to be non-deterministic but it should be defined. No catch-fire semantics.

Mixed-sized atomic memory access is also something in almost no prior work, first proper investigation in POPL17.

MM: The reason why JS has that, is that we couldn’t find a way to avoid it.

AR: Neither can we. The POPL17 paper looks more at what actual hardware does. This model is weak and the ES model is even weaker (maybe than it has to be).

MM: Concrete suggestions on how the ES model can be stronger? Would be interested in taking to TC39

AR: Already do, Conrad made some proposals. Fixing wait/wake... perhaps a few other things. Some bugs probably.

AR: Mixed size atomic accesses are difficult, (gives example) You can see and un-see an atomic store, which is weird and incompatible with any operational model, but not known how to strengthen it.
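The kind of mixed-size access under discussion can be sketched with the JS Atomics API (a hedged illustration, not one of the examples from the paper): two views of different widths over the same shared buffer touch overlapping bytes, and the model has to say how such accesses interact.

```javascript
// Two differently-sized views over one SharedArrayBuffer: a 32-bit atomic
// store overlaps four 8-bit cells. Single-threaded here, so the result is
// deterministic; with racing threads the memory model must define which
// (possibly partial) results another thread may observe.
const sab = new SharedArrayBuffer(4);
const word = new Int32Array(sab);
const bytes = new Uint8Array(sab);

Atomics.store(word, 0, 0x01020304);
// On the little-endian layout Wasm mandates (and that JS typed arrays use
// on practically all hardware), the low byte comes first.
console.log(Atomics.load(bytes, 0)); // 4
console.log(Atomics.load(bytes, 3)); // 1
```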

BN: it’s not any weaker than the hardware + optimizing compiler which can reorder the load.

AR: In all models so far it was just not clear when you would finally see a change, never that history would be reversed for atomics.

SYG: Mixed-size atomics, they’re weakened in the JS model to be non-atomics, which are stronger than C++ non-atomics. It’s the same set of weak behavior as non-atomics. So you’re saying that, there’s something stronger than this in hardware but weaker than atomics?

AR: Likely no hardware would expose this (rewriting-history) behavior, and I don't think it should. Though the paper gives some examples of unexpected things.

SYG: To add on, one of the things that was very difficult -- understanding the boundary between hardware and smart compiler. Where does wasm want to put this boundary?

AR: Thats an excellent question

BN: What concrete problem does ... the main reason is that we don’t want undefined behavior.

AR: We want to start conservative (weak); moving forward, the problem is we would not be able to model it in an operational fashion. We would perhaps like to strengthen it, but don't have a concrete suggestion of how.

BN: Put it this way, if we ended up in the situation that we have an optimization that we can’t use because of the model, we’d weaken the model.

AR: You never know what crazy thing the next generation of HW will do; that's a general problem we can't anticipate.

BN: I’m curious what Shu thinks. I perceived that the model is as close as possible to UB, without going there.

SYG: We decided what the weakest hardware is that we want JS to run on; we said POWER, but it's basically ARM. It would be nice to make it as strict as possible. Wasm is so much closer to the metal, so it's governed by what hardware you want the raw stores and loads to run on.

JF: You want to go weaker, try GPUs. There’s a fully specified memory model, maybe not published?

BN: Do you think we need some sense of what the widest envelope is. We dont want to rule out some hardware… maybe a historical power chip?

JF: There are things in the C++ model that model (inaudible) that are inadvertent and will be fixed in C++20; some of the mistakes in C++11 are useless programs that broke the model. We should look at the tests for those platforms and see if that aligns with your models.

AR: in general, we should stay on the conservative side, we’ll need to rely on research for anything else.

JF: We only have non-atomic and sequentially consistent, which simplifies allowed outcomes; it's silly but I don't really care.

AR: I agree. The more interesting thing...

VB: Do you have alignment issues?

JF: It traps if they’re not aligned.

BN: 8-bit loads...

[discussion about non-aligned and aligned atomic accesses, based on c++ source etc etc]

AR: More interesting thing is grow memory. Store beyond page boundary in one thread, twice, in other thread you grow the memory. Both are OOB before but not after grow. Should it be possible that the first store traps and the second succeeds? Probably we have to allow that.

BT: What happens on real hardware?

JF: The first one wouldn't ever get emitted; an optimizer would eliminate it.

AR: Can change the example. One thing that has to happen: the grow has to be sequentially consistent with any failed access; it doesn't have to be seq cst with successful accesses. Brad and I talked about how that matches with what implementations do. Still surprising and weird. So in general that means you cannot reorder stores in a wasm JIT. In general you cannot do the necessary analysis: if you can’t prove that both are within bounds, you can’t reorder them.

AR: It gets even more interesting when you have tearing stores at boundaries vs. grow - can you observe them partially?

JF: The bulk memory operations would have them

AR: Yes, but they are sequences of single byte stores.

JF: Copy can go forward or reverse or whatever it wants, simd or whatever

AR: Tearing reads or writes are a single instruction and still have this issue

WM: I’m curious, do any implementations allow memory to grow while other tasks are running? We stop everyone at a safe point, and things wouldn’t work if we didn’t.

AR: That's the main implementation method everyone uses; I don't know if we should require that though.

JF: For our engine, we have to implement both if we have to move memory: we have to stop the world and move memory. If you got a large allocation up front, then why would you stop the world?

BN: We have the same issue

BS: Grow memory is infrequent; who cares if it's slow?

BN: (inaudible)

WM: Don't want to go down the hole of open research problems if nobody is going to use it.

AR: The ES spec probably needs changes around detaching; SABs are not supposed to be detached, but with wasm, grow_memory will detach them.
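The interop wrinkle AR describes can already be observed from JS with non-shared memory, where grow detaches the old buffer; the open question is what the spec should say when the buffer is shared. A small sketch, runnable in any engine with WebAssembly support:

```javascript
// Growing a (non-shared) WebAssembly.Memory detaches the previously exposed
// ArrayBuffer; a fresh, larger buffer is handed out afterwards. The debate
// above is about the shared case, since SharedArrayBuffers are not supposed
// to be detachable.
const mem = new WebAssembly.Memory({ initial: 1, maximum: 2 });
const before = mem.buffer;

mem.grow(1); // grow by one 64 KiB page

console.log(before.byteLength);     // 0 -- the old buffer was detached
console.log(mem.buffer.byteLength); // 131072 -- two 64 KiB pages
```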

MM: I will bring this up on Thursday, re the JS spec: the shenanigans for similar array buffers in JS and Wasm, where the JS ones are non-growable and the Wasm ones are growable. That mismatch is costly; say they are the same abstraction and accept growable buffers into JS, and get rid of the weird impedance mismatch between the systems.

AR: I would be fine with that, but it might be too late.

(unidentified): An alternative is, instead of detaching the SAB, to just leave it.

AR: Bounds checking on JS with segfault, wouldn’t you succeed with something you should fail on?

MM: On your criteria, turning errors into non-errors shouldn’t break old code, why is it too late to make changes to JS?

AR: Implementors in the room have to answer that.

[Discussion about DOM APIs dealing with growable / detaching SABs would be difficult, has issues with concurrency, concurrent SAB growth is very difficult]

MM: The implementation burden for WebAssembly is within what everyone is implementing, so if the JS abstraction was made the same, does that increase or decrease the burden?

BN: It’s already the case that we’re white-listing the cases that can take shared memory. But growth is a whole other thing.

LW: It would be interesting if all web apis just took views, so you would have an atomic sample of the .length or other things. Do any APIs take a SAB itself and not a view on them? (Besides post message)

BS: Can’t think of one

BN: Changing array buffers would have huge implications, since they’re shipped.

JF: Nobody has shipped shared array buffers /irony

MM: Haven't we made this complexity ourselves by giving wasm and JS SABs different semantics?

LW: I proposed this and was shot down

BS: That's a large conversation that involves the ES spec and DOM APIs.

LW: We could agree to propose it

[Discussion of various concerns with proposing these changes more broadly]

AR: Somebody take this to TC39 to discuss further

MM: I will

AR: (back to topic) Current status: we have a sketch of the operational model. Similar to how the ES spec does it, nondeterministically create memory events. No meta-theory yet, not integrated with the wasm spec document itself; need to describe ES interop and tweak the ES spec. In summary it's a challenging problem, but we roughly know now what we need to do. ETA later this year, thanks to Spectre for buying us more time. This might be worth a paper as well, since it involves novel things.

Discussion on Status of Threads Proposal

Slides

BN: Want to have a discussion about what all the implementers are thinking.

First let's start with SharedArrayBuffer. Google is shipping host fault isolation in Chrome 67, stable by the middle of the year. Testing in beta at 100%, so far so good. Talking about re-shipping SAB after that; we would be reasonably confident about that once host fault isolation is shipped. We have many customers waiting for this, using all kinds of features including SAB.

There seems to be a lot of stability on the core proposal and the tools.

(Edge): I think it would be irresponsible to re-enable until we have site isolation. We want to make sure the address space doesn't have anything sensitive in it.

BN: Site isolation isn't a binary thing, there are nuances

(Edge): Site isolation is a little ways off, we have a lot of resources on it but its not going to be here in the near future.

LW: Mozilla has more or less the same story as Edge. Some of these proposals make achieving site isolation easier - we still need some time to evaluate them.

JF: Apple put a statement on our blog. "What Spectre means for WebKit". That’s still the public plan of record as far as SAB.

BN: The other observation I had: at the high-level summary there have been no changes since January; that's when we disentangled some of these sub-proposals. Because we have customers who want to experiment with SAB and Wasm threads, Google is considering doing an origin trial by whitelisting their sites, but we’re aware that's a moral hazard in terms of the open web. We want feedback from users of the whole stack.

The threads proposal is tagged as phase 1.

BS: I think we are at phase 2, but we are missing some spec tests, since there are issues with the memory model Andreas just discussed.

BN: All of the caveats Andreas just raised are a possible reason to delay. Do we need the memory model to be completely worked out before we proceed, or do we move ahead without it?

AR: We don't have a spec unless it has a memory model.

JF: The rest of the spec moved forward without operational semantics for a long time. The world of computing is just starting to figure out how to have operational semantics instead of handwaving.

AR: We didn't have a proposal process before. Not having the spec text won't stop you from shipping it, it just stops the spec.

BN: There’s one piece we can possibly disentangle, do we need to make sure the rest is disentangled before moving forward

AR: There's also no thread model in the spec, so there are other basic things missing. The spec is totally in terms of one thread, so there are ways that model needs to be extended, even without all the nuances of the memory model. We should at least change that.

BN: Spectre means we have time to get this sorted out. I have a concern about the lack of prior art. We did something with ECMAScript, as perfect as we could make it.

So by the letter of the phases, we are still in phase 1

I want to hear about the users of SAB and the reaction you got by removing it.

LW: Mostly disappointed developers. I didn't hear of any exciting non-pthreads use cases.

BN: We still have lots to figure out.

Annotations for the Wasm text format

Andreas Rossberg

Slides

AR: This is simple in that it only affects the text format. The motivation: the binary format has custom sections. There is no equivalent in the text format, so there is no way to round-trip custom sections, and no way to express meta information (e.g. for host bindings annotations, where you want a generic mechanism to ship them).

Proposal is to allow (@id …) annotations anywhere in the text. Just like a custom section, we don't assign any meaning to them. Any tool that interprets them is free to do with them what they see fit (assign meanings to some ids).

Annotations are allowed anywhere that whitespace is. The @ is the actual marker; the id is any identifier. The convention is that it should correspond to the name of a custom section, but that's not required. The body is an arbitrary token sequence; it could be anything as long as it is well-bracketed. That's all the spec would define. A tool can define more concrete structure for its annotations, but like with a custom section, it's up to the consumer to deal with malformed ones or whatever.

[Example about generic custom section, repeats something BS put in proposal]

[Example about JS host bindings: put the JS conversions for parameters as (@js unsigned) right next to the declaration of the identifiers. The host bindings spec could define the meaning of this syntax.]

BS: There's an issue with the current text format where you can't round-trip certain names.

AR: You can use custom section name to round trip it, or you can put a name annotation on some construct, if you wanted to put one of those ways in the spec.

[Discussion about full round tripping]

AR: You can't really expect arbitrary annotations to round-trip precisely.

JF: Early on we decided that you won’t do full roundtripping of the text format, even without these annotations.

AR: a spec of a custom section could have a spec describing how the annotations relate to the binary, and the tools could implement those individually, there wouldn’t be a generic method to convert between them.

JF: These are effectively a different section, in the text format they can be interleaved.

SK: We could use these for the object format that LLVM is spitting out, which doesn't have a text representation at this time.

AR: This is at stage 0, I’d like a poll to move it to stage 1.

BN: We talked about this; it's almost in the tools-convention category, but it impacts the whole stack.

AR: The process assumes that it will be implemented in engines; that doesn't apply here.

JS: I would want to put this in wabt, I’m pretty sure it will work

AR: There is an implementation of this in the reference interpreter, should not be too hard to do elsewhere

BN: We don't expect more than one toolchain to implement the same thing.

LW: We haven't shipped it in the SpiderMonkey text parser (unclear whether it should be, or if sarcasm)

AR: There's no spec text yet.

BN: Maybe we need a PR on the phases, if a feature is salient to the toolchains it requires two of the toolchains?

JF: For phase 2 you need full english spec text. You’re at 1

AR: Oh, if I am at phase 1 then I don't need it to be moved to a new phase. I will go do the spec text and come back with that.

Mutable Globals

BS: Basically, WebAssembly.Global is added as a new JS type. The current spec text says that there’s a descriptor; one of the properties is Value, and if you don't provide a value then it is undefined (we do a Get against the descriptor). Because we need a WebAssembly value, undefined turns into NaN when your type is a float. I was just doing an implementation of this and thought the value should be +0 instead.

JF: Someone made a comment that NaN is appropriate because it propagates.

BS: NaN is appropriate because that's what wasm should do; maybe change it only if the property doesn't exist (do a has instead of a get).

AR: Treat it like a default parameter. Tomorrow I was going to propose not having the value in this descriptor and making it a separate parameter; that makes dealing with the default better (defaults to zero).

BS: I like that, how hard is it to do

AR: It's already JS syntax / does the library use it very often?

MM: I can't think of a reason we can't do it.

AR: There are functions with optional arguments?

MM: They're defined not by default values (?? I missed something here)

AR: But prose has equivalent effect

JF: The question is what should the default be - do we want it to be NaN or zero?

BS: Zero seems like a better idea; that's how it works for locals.

AR: Maybe it should just be the default value for the specific type; then it has to happen for the rest of the types, like null for refs.

BS: Problem there is there will one day be non-nullable references

AR: (agreement)

Poll: Should we use the default value of the type as the default value of the global?

| SF | F | N | A | SA |
|----|---|---|---|----|
| 3 | 9 | 6 | 1 | 0 |

MH: I don't have a strong reason to be against it; it just seems unnecessary. What if you passed undefined as the argument - should that be coerced into zero? I can see use cases where you might deliberately pass undefined.

BS: I see what you mean; it's hard to tell the difference between not passing something and passing undefined explicitly.

MH: Ok, so it's not a strong objection.
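The shape AR proposes (the initial value as a separate, optional constructor argument that defaults to the type's default value) can be sketched against the JS API; this matches what engines implement today, though at the time of the meeting it was still under discussion:

```javascript
// With an explicit second argument, the global starts at that value.
const explicit = new WebAssembly.Global({ value: "f64", mutable: true }, 1.5);
console.log(explicit.value); // 1.5

// With the value argument omitted, the global gets the type's default
// (+0 for f64) rather than NaN -- the outcome the poll above favored.
const defaulted = new WebAssembly.Global({ value: "f64", mutable: true });
console.log(defaulted.value); // 0
```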

Wednesday

Tail Calls for Wasm

Andreas presenting.

Slides

AR: I think we addressed the main concerns around Microsoft implementing this.

AR: Key point: you can do an unbounded sequence of tail calls without blowing the stack.

AR: NOTE: We agreed it's valid for an implementation to amortise the space cost of a finite number of non-tail calls.

AR: new instructions, return_call and return_call_indirect work like a combination of return and call, for execution, validation, syntax, etc.

JF: for MS, calls with more arguments was the problem, right?

MH: yes, but we can thunk out to grow the stack (amortized tail calls)
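The guarantee at stake here - an unbounded chain of tail calls must run in bounded stack space - can be illustrated with a JS trampoline (a sketch of the observable behavior only, not of how return_call would be implemented in an engine):

```javascript
// A trampoline keeps the stack flat by returning thunks instead of making
// nested calls; native return_call gives the same bounded-stack property
// directly, without the closure allocations this simulation pays for.
function trampoline(step) {
  while (typeof step === "function") step = step();
  return step;
}

// Mutually recursive even/odd written in tail-call style: each step returns
// the next call as a thunk rather than invoking it on the current stack.
const even = (n) => (n === 0 ? true : () => odd(n - 1));
const odd = (n) => (n === 0 ? false : () => even(n - 1));

// A million mutual tail calls, constant stack depth throughout.
console.log(trampoline(even(1000000))); // true
```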

AR: open questions - types for either tail-callable or tail-calls. (tail callers / tail callees). Marking tail callers potentially saves a register.

LW: It seems like you need both.

BT: If you mark the function which is performing the tail call

JF: Seems like last time we talked about whether imports were allowed to do tail calls.

AR: If you're performing cross-module tail calls there may be unbounded cost.

Rodrigo: ..

BT: If you tail call across security domains then it shouldn't be allowed.

LW: You might need a more expensive ABI for cross module calls.

BT: It would probably be something with the embedder, such that if you call cross-domain it's not a tail call.

BT: The notion of security domains is an embedder concern.

AR: The embedder would specify what cross domain calls do.

JF: I'd like us to try to specify it. I.e. it's in scope for this proposal. This proposal needs to encompass more than the spec.

AR: Are you saying cross module tail calls would be ok? I think there were arguments we wanted to disallow that.

JF: The core spec doesn't need to say cross-module calls are forbidden; it could allow them.

BT: It comes down to making guarantees, we can only do this within a module.

MM: Can you explain the register saved?

MH: The register would be how much stack space you currently have available.

MM: Is this part of the type system?

AR: Drawbacks: The whole point is that you have completely incompatible calling conventions. With direct calls you can always do the right thing, but if you have indirect calls, it would be a nightmare unless you standardize on one convention in your compiler.

You have to anticipate all possible uses of every function. In practice implementations would just pick one.

It also doesn't say what you'd do with ABIs and libraries.

If you call a known function you can always introduce a wrapper. In the indirect case there's no way to construct it.

EXAMPLE - See slides

AR: You'd have to introduce a new operator for converting references between calling conventions. Would require allocating a closure

DG: For call_indirect, you know what you're calling at the call site.

LW: It's just for indirect call there's a problem. A callref to a function reference could call anything.

BT: Functions references will be typed too…

DG: What if can make subtyping work?

AR: You don't know statically what you're calling.

DG: Why doesn't subtyping work?

LW: We could generate little wrapper.

MM: Let me make sure I understand. No subtyping you mean there's no conversion.

AR: If we allow subtyping, then all callsites that access supertype have to make a case distinction. Alternatively, require an explicit conversion, but then you need a closure and buy into GC.

LW: C/C++ have this. I.e. 4 different calling conventions.

BT: We could have a hints section. It could be factored out into a separate section.

LW: Unless it fails [validation] it seems useless.

MM: All this seems like a lot to save a register.

BT: We're talking about upleveling a machine concern.

MM: If we avoid the complexity of putting this into the type system, do you have any concrete sense of what the cost is?

LW: You have to have more expensive ABIs.

MM: So at the moment.

BN: Discussion on costs.

LW: The concern…

BT: I think this is a very small win, we could add hints later if we show a win.

Rodrigo: We had a similar thing with .NET, and the compiler team tried to work around it, but it has some performance cost. But it only affects some producers.

MH: I'd be concerned LLVM isn't going to be willing to make changes for our engine.

Rodrigo: You don't have to make the regular case

BT: You could add this to your internal type checking…

You have one version that's tail-callable and one that's not.

LW: Some universes will go all tail call some all not.

BT: Concretely, our implementation won't benefit right now.

LW: It's a well known ABI in C++ that the optimal thing won't allow tail calls.

BN: ...

LW: It's also not inter-module.

BT: Then

JF: Modules are a thing the developer might not know about. Tooling might come in later and split things up.

LW: I'm dubious on this hints section especially in the inter-module case.

BT: The idea of having a hints section that’s not binding..

WM: How is this actually used? Nobody has any trouble using a libc from a Scheme implementation.

JF: Let’s avoid going into implementation details. AI from last meeting for Luke and Michael, as they have strong opinions, to come back with something that’s easier to grok: WebAssembly#113

AR: What you’re looking for is performance measurements really

LW: Do we make all calls on some engines slower? We need a number and a threshold, it’s not tail calls, it’s normal calls

BT: We have to show an actual wasm implementation

BN: Why?

BT: We’re talking about measuring small numbers; it comes down to measuring them on the implementation.

MH: You’re talking about making all calls 3% slower

LW: If you implemented it, with worst case measurements - with request for features for what makes it faster, it might be something concrete

MH: Can try..

BT: We want data to clarify the discussion

MM: We should really guard adding complexity to the type system. I think we should go forward not adding it.

AR: We'd also have to design this reference conversion instruction as well.

MM: Let’s assume we’re going forward without complexity to type system

DG: You’re adding a feature, that slows down calls that are not using the feature

MM: Don’t want a semantic difference that we have to live with, unless it’s necessary

MH: I do have an alternative thought. What if we required the caller and the callee signature have to match.

AR: There's no relationship between the arguments of the caller and callee.

MH: You can do mutual recursion + tail recursion.

AR: Mutual recursion would require concatenating the parameter lists of all functions in the recursion group?

BN: Develop an ABI inside the function

AR: Doesn’t work for indirect calls

WM: It seems like languages that use tail call use very different calling conventions than languages that do not.

JF: I think we really need to wrap up this discussion. I don't think we're going anywhere now. Seems like Brad is concerned.

BT: Cost is not predictable, even inlining can affect the performance

MM: Being agnostic to implementation cost, there’s enough uncertainty of implementation cost, there’s an unquestionable cost of complexity of type system

JF: Let’s gather data to resolve

BN: Imagine there was a module flag that limits you to MH’s proposal; that way you have an AOT hint - a flag for only doing natural tail calls, which fails validation otherwise.

MH: That would be a fine place to start

AR: please no modes. Not clear how that works for imports/exports either

MM: If we proceed before we have data, proceed in a way that minimizes type system complexity

AR: What does proceed mean? We need implementation feedback anyway

JF: We want to see where the dead bodies lie

AR: Only way to get real data is to have an implementation

BT: Our implementation wouldn’t incur extra cost

LW: We like the windows ABI, we’d be interested to see what MH’s results will be

BN: I’d like to move this forward, would the flag that we discussed upset you?

AR: Yes

BN, MH: Why?

AR, BT: I want to add something semantically useful, modes are a bad idea

MM: Isn’t it still the case that it would be most useful to get data first? Before adding any kind of validation

BN: There’s an ambiguity on the threshold - what happens when MH comes back with numbers? We should establish a threshold before we get someone to do the work. I still need an articulation of why we can’t start with a limited MVP

BT: I don’t think we can agree on a threshold because it’s a multi-dimensional space

Discuss poll phrasing

POLL: We want to consider proceeding in a different direction than what Andreas has proposed, based on data that Michael will collect.

| SF | F | N  | A | SA |
|----|---|----|---|----|
| 6  | 4 | 12 | 3 | 0  |

BT: I’d like to clarify I’m not against data.

AI: JF to update previous issue

JS type API

AR: I expect this to be less controversial.

This is about extending our API to give access to type information.

Slides

Types carry information about sizes of tables and memories.

Desire to query some of that.

MM: What do you mean by form?

AR: What sort of import/export.

Systematic representation of wasm types as JS Objects.

See slides.

Key point, allow constructors to accept types.

Add WebAssembly.Function

See slides. Wasm Types as JSON
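As a rough illustration of the "types as JSON" idea, the descriptors might look like plain JS objects along these lines (the field names here are assumptions taken from the slides, not a finalized API):

```javascript
// Hypothetical shapes of reflected wasm types as plain JS data.
const memoryType = { minimum: 1, maximum: 16 };          // memory limits, in pages
const tableType  = { element: "anyfunc", minimum: 2 };   // a table type
const globalType = { value: "i32", mutable: true };      // a global type
const funcType   = { parameters: ["i32", "i32"], results: ["i32"] };

console.log(funcType.parameters.length); // 2
```

Because these are ordinary objects, they compose naturally with `JSON.stringify` and can be fed back into constructors, which is the key point above.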

MM: There's a lot of anticipated evolution to the wasm API, does this accommodate that?

AR: One thing we have to note is that some of these things will be extended, we can add information to types (see slides for types), there could be additional fields to types as long as you have reasonable defaults

MM: When representing recursive types, since JSON is always a tree, this would be an infinite rational tree.

AR: JSON is just an approximation here, we don’t have recursion right now so it doesn’t come up, but interesting question

Slide Wasm Types vs JS API

Current API descriptors nearly match this.

Add FunctionType + ExternType.

Slide API Extensions

JF: Do ES6 modules have any similar form of reflection?

AR: There’s no class of ES modules, it’s unrelated

BN: I think it’s all untyped

JF: Can you reflect on what an instance contains?

AR: You already can in current API, extension of import/export descriptors by this proposal adds more information

MM: Since JavaScript is not statically typed, this API requires me to do two levels of things: figure out the right object, and then get out the type object.

Why not have a single type function?

BN: Are you proposing WebAssembly.Type? / getType

AR: Then you'd have to return an ExternType.

MM: Withdrawing suggestion

AR: Imports and exports are similar.

They return descriptors. Import and export would now return full type information.
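A hypothetical sketch of what the extended export descriptors could look like; the `type` field is the proposed addition, and its exact shape is an assumption from the slides:

```javascript
// Sketch of what WebAssembly.Module.exports(module) might return once
// extended with full type information. Today's API already gives `name`
// and `kind`; the proposal adds `type`.
const exportDescriptors = [
  { name: "memory", kind: "memory",   type: { minimum: 1, maximum: 16 } },
  { name: "add",    kind: "function", type: { parameters: ["i32", "i32"],
                                              results: ["i32"] } },
];

const fn = exportDescriptors.find(e => e.kind === "function");
console.log(fn.type.results[0]); // "i32"
```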

AR: We want to separate type from value for new Global.

Slide: Example

JF: I think what we're currently missing is table and memory size.

Someone on stack overflow is asking about it.

BT: It was you asking ;-)

Slide: WebAssembly.Function

We have a fourth kind of export: functions.

This would introduce a subclass of JS's regular functions. It’s the class of Wasm exported functions. The constructor can turn JS functions into Wasm functions by providing a type

Closes a gap: currently you can't, from JS, put a JS function into a wasm table.
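A minimal sketch of the idea behind WebAssembly.Function, written as a plain class so it runs anywhere; the real proposal would make this a genuine subclass of Function wired into the engine, so the names and behavior here are assumptions:

```javascript
// Illustrative stand-in for the proposed WebAssembly.Function: a JS
// function paired with an explicit wasm type.
class WasmFunctionSketch {
  constructor(type, fn) {
    this.type = type; // e.g. { parameters: ["i32"], results: ["i32"] }
    this.fn = fn;
  }
  // Calling checks the declared arity, mimicking a typed wasm boundary.
  call(...args) {
    if (args.length !== this.type.parameters.length) {
      throw new TypeError("arity mismatch");
    }
    return this.fn(...args);
  }
}

const inc = new WasmFunctionSketch(
  { parameters: ["i32"], results: ["i32"] },
  x => (x + 1) | 0 // coerce to i32, as the wasm boundary would
);
console.log(inc.call(41)); // 42
```

With the real feature, such a typed function could be placed directly into a wasm table, which plain JS functions cannot be today.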

JF: How does this work for i64?

AR&LW: It's just equivalent to creating a little module importing the JS function.

JF: What if you import and export something with an i64?

AR: You'd get a function that would throw on invocation.

AR: With anyref you'd now be able to pass in WebAssembly.Function

[Example]

MM: So the type signature is supposed to represent a variadic function?

AR: It’s a function of one i32, the JS function that implements it can be given any type

BN: Does that have the side effect that you can introduce new signatures into a module?

AR: That’s already the case

Slide: Wrinkle: Naming of limits

AR: Current naming only makes sense in constructor. We should rename!

We probably just need to allow constructors to accept both.

LW: Funny that minimum changes over time

MM: I've heard this described as monotonic type state.

LW: What's the use case for asking the type of the memory?

Knowing the maximum can be useful.

Is anyone using the type to call it off the constructor?

AR: If you think about another use case, where you implement an actual linker.

LW: That might even be a case where asking what was the initial length, and also the current.

AR: You want something that matches how the link-time type checking works internally.

JF: We’ve had this discussion, trivial to store initial length

LW: It's not the size that's checked, that makes sense.

AR: It would be another sort of information that's not about the type.

I don't know what you'd use it for other than debugging.

AR: I proposed in the type structure we'd rename initial to minimum but continue to accept the old name.
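A sketch of how a constructor could accept both the old name and the proposed name for limits, as suggested above (the helper name is made up for illustration):

```javascript
// Hypothetical normalization of a limits descriptor: prefer the proposed
// "minimum", fall back to the legacy "initial".
function normalizeLimits(desc) {
  const minimum = desc.minimum !== undefined ? desc.minimum : desc.initial;
  if (minimum === undefined) throw new TypeError("missing minimum/initial");
  const out = { minimum };
  if (desc.maximum !== undefined) out.maximum = desc.maximum;
  return out;
}

console.log(normalizeLimits({ initial: 1, maximum: 10 })); // { minimum: 1, maximum: 10 }
```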

BN: I have a couple of general questions. Is there a strong reason not to separate WebAssembly.Function into its own proposal?

AR: I think it is simpler to lump it in here, because there’s no place to put the .type if you don’t have this.

BN: Second, there's a strong case for cleaning up some of the things going in. Adding a type value is doubling our JS API surface; could we build it into a library instead?

AR: This is future-proof, if you are given a module that you don’t understand you can reflect on it.

I did implement such a library for Dfinity

BN: Are you leveraging some existing parser?

MB: The parser is 400 lines of code, parsing a wasm module to JSON, generating functions that go into tables from types exported by other modules. Let’s take it offline

BN: Curious: why was it that you didn't statically know the information?

MB: Because it’s not known ahead of time; multiple unknowns

AR: It’s the use case of a loader

TS: TC39 is usually very conservative about adding built-ins; this would almost certainly clear that bar.

AR: The main API surface here is specifying all the property attributes; we have to specify whether it’s a fresh object each time, etc. That’s JS

MM: Introducing a data language that reflects the type system. Will it allow us to grow the type system in the direction we want? You’ve talked about introducing opaque data types... abstract data types... so you don’t see the contents of the type. Is there a natural representation of this in your language?

AR: We could use a symbol, that’s a generative name

MM: Type system is structural, what if we want nominal.

AR: It’s the same question I think

MM: We don’t have recursive types right now. When someone takes this descriptor and calls JSON.stringify on it, they’ll get a JSON string. If in the future this returns a recursive structure, JSON.stringify is spec’d to throw, so there’s an upward-compatibility gotcha here.

MM: Question for Till, w/ value types in JS, is there anything here that is connected?

TS: I don’t think there’s anything here with intention; I was going to ask if we can represent this as type objects?

AR: I assume typed-objects will take two years, and this I want tomorrow :-)

MM: Good to anticipate problems AOT

AR: Regarding recursion, it is not going to be here, it will be in the type section. The types here will just refer to a type in the type section. When you have a reference to a function type or structure type it will refer to a name.

MM: Some kind of name spacing...

AR: Reflect the type section separately, so current types won’t become cyclic.

JF: We can figure that out well after stage 1. Early enough that this is theoretical, let’s establish direction and if it establishes requirements for stage 1.

POLL: move JS type API to Stage 1.

| SF | F | N | A | SA |
|----|---|---|---|----|
| 19 | 7 | 1 | 0 | 0  |

AR: This is purely JS API, whoever would be interested in collaborating on the implementation end, talk to me.

BN: If no one else wants it, I’ll find someone

WebAssembly in blockchain

Slides

SH: A blockchain is a distributed, peer-to-peer network. Take an open-source codebase, run it on your computer, and become part of the network. A node sends a packet to the network, which comes to consensus on the results. Atomic function calls on an existing state.

You can write code in Rust, compile it to WebAssembly, and send the bytecode to the network. When someone sends data to an address, the function will be run. Each miner will run that code, and they come to consensus on the result.

Everyone executes all state transitions for the transactions. When you make a function call, some calls have 10 instructions, some have many more; people have to pay based on resource usage. Metering is paying according to resource usage, measured by instruction cost. A metering instruction is inserted at the top of each block.

[interruption from VC]

When someone makes a function call, they include some payment amount. As you execute you take payment, if you run out, you throw "out of gas"
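A toy sketch of the gas model described above, with the metering expressed in JS rather than injected wasm instructions; the costs and names are illustrative only:

```javascript
// Each metering point charges for the instructions that follow; execution
// traps with "out of gas" when the paid-for balance is exhausted.
function makeMeter(gasLimit) {
  let gas = gasLimit;
  return function useGas(cost) {
    gas -= cost;
    if (gas < 0) throw new Error("out of gas");
  };
}

// A metered loop: an instrumenter would have inserted useGas(...) at the
// top of each basic block.
function meteredSum(n, useGas) {
  let sum = 0;
  for (let i = 0; i < n; i++) {
    useGas(3); // charge for the ~3 instructions in this block
    sum += i;
  }
  return sum;
}

const useGas = makeMeter(1000);
console.log(meteredSum(10, useGas)); // 45
```

Because every node charges the same cost at the same points, all nodes either complete with the same result or abort at the same instruction, which is what makes consensus on the outcome possible.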

MB: The whole point of metering in blockchains is to solve ... since we’re doing symmetric computations, all nodes need to come up w/ the same results. We want to make sure all programs always complete; we want to force decidability on all programs. Metering hopes to accomplish this.

Metering also helps incentivize, we can say it will cost this much to run the program.

SH: Non-determinism. Since all nodes have to come to agreement, avoiding non-determinism is important. Some issues are resource limits (stack depth). Someone from Ethereum brought up that wasm doesn’t have resource limits for loops (you can’t tell the difference between the first and the nth time through the loop).

Metering helps here.

AR: I want to differentiate operand stacks and call stacks... operand stacks don’t have a physical materialization. I see no issue with loops

You can trivially build a recursive function that can blow the call stack, and it may be different on different machines. So we have to be careful with the stack depth, so we always have the same number of iterations for recursive functions.

SH: Memory is also a concern for non-determinism. Floating point as well, including the sign of NaNs. A simple way projects are dealing with this, during the validation phase, look to see if there are any float operations or globals and reject the module.

AR: We are planning to inject normalization code for NaNs for Dfinity. You pay some extra cost for this. It might help if there were a concept within wasm specifying a deterministic variant of wasm that could help here. I’m not suggesting a mode, but more a language variant: a sub-language where everything is deterministic.
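A sketch of what bit-level NaN canonicalization amounts to: every NaN is replaced by a single canonical bit pattern so all nodes agree on the stored bits. This is illustrative, not Dfinity's actual injected code:

```javascript
// Replace any NaN with the canonical quiet-NaN bit pattern for f64
// (0x7ff8000000000000), so serialized state is bit-identical everywhere.
const view = new DataView(new ArrayBuffer(8));

function canonicalizeF64(x) {
  if (!Number.isNaN(x)) return x;
  view.setUint32(0, 0x7ff80000); // high 32 bits of the canonical NaN
  view.setUint32(4, 0);          // low 32 bits
  return view.getFloat64(0);
}

console.log(canonicalizeF64(1.5));                 // 1.5
console.log(Number.isNaN(canonicalizeF64(0 / 0))); // true
```

Injecting a check like this after every float-producing instruction is what the "extra cost" above refers to.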

MB: The other place with non-determinism is the SIMD proposal; would you just leave that out?

AR: Take all the non deterministic instructions, and fix their behaviours

LW: Would you just pick limits?

AR: Resource limits are handled by gas.

LW: Are they handled by gas, what about blowing the native stack?

AR: If you pick your cost model carefully, the cost depends on the call stack

LW: The deeper the stack, the more expensive the call?

MB: In an ideal world, you can meter by how big your call frame is. If you push a call frame with a lot of locals, it should cost more.

AR: The usual fuel model cannot distinguish between sequential calls and nested calls

MB: You could ... every time you enter a function, estimate how big it will be.

AR: Yes but that’s a separate problem. Can’t detect nesting easily

MH: Why the resource limits? Why non-determinism there? Is it a problem if they run out too early?

AR: You need consensus. Even if they run out of resources, they should all be deterministic

RK: How do you get deterministic floating point.

DG: We have totally deterministic floating point [SIMD notwithstanding], IEEE754 gives us the standards to do that, we have tests that test this

MM: In that case, wasm should adopt that deterministic answer...

DG: Except for NaNs, historically there are differences across hardware implementations, it's debatable whether we did the right thing here, but we don’t canonicalize NaNs

MM: There’s no known cost-free way to canonicalize across hardware?

DG: no. [though individual platforms sometimes do have such features, such as the DN flag ("default NaN") on ARM]

MM: Gas being deterministic is not necessary ... it just requires that the quorum that succeeds agrees. You don’t have to be so careful about a deterministic cost model.

AR: Probably true. It’s still convenient if you don’t have to..

SH: Instrumentation: getting information from inside the wasm runtime out to the environment. We need to know the contents of the stack. Truebit takes snapshots of the wasm state, and the stack is part of that. There are different ways to implement that, e.g. an explicit stack alongside the implicit stack. I understand this is something you’d need for garbage collection.

JF: You want to be able to take a safe snapshot?

SH: Yes, I think we can take care of that.

AR: That’s the hard part, the shadow stack ... I don’t know why you need to recreate a stack?

SH: Want to snapshot execution at an individual instruction ... if you have an idea of the snapshot of the wasm state: linear memory, stack, globals, all the mutable state.

MM: I suggest that for blockchain usage, the natural snapshotting persistence model doesn’t require the stack... the blockchain gives you an agreed order of messages, if you process in order of completion... Ethereum says if you run out of gas, you consider that an aborted message ... you only need to look at the successful ones.

AR: Yes, but I think he just explained why they want more than that.

SH: In the general case when you are using transactions as the atomic bits of change, it makes sense - just pointing out the different implications that came up while talking to different people

AR: That seems very difficult to achieve, depending on how the VM is implemented. Do you have your own VM? (SH: yes) Oh, then you can do whatever.

RK: This could be interesting for (???) tools

SH: There are solutions to these, different companies have different ways to solve this, just wanted to bring this up to the community

BN: This is interesting for crash reporting... it may be lossy.

AR: The general observation is that if you need to accurately inspect the stack, it is a huge burden on implementations and rules out a number of essential optimizations.

SH: Imports and host functions. The wasm VM needs to take info from the blockchain (block hash, hash function, timestamp, etc.). Some have mentioned that the spec is focused on the JS environment. Having a non-web, platform-independent spec would be helpful here.

MB: It might be helpful to have a blockchain embedding spec, there will be common things - block hash, block numbers etc. that would all be common.

JF: It’s not how you expose the import but the list of imports you expose?

AR: You want to define the wasm-facing API, like the OS, for executing on the chain. It’s the opposite direction from JS, where you just take the existing JS. We have already worked carefully to separate JS from core. It would be OK to have another embedding spec. Is there enough commonality to have a common blockchain spec?

MB: I think there is one, but it might be too small to satisfy.

SH: Next point is traps. If you had typed traps you could build static-analysis tools. That would be useful in places where you need security. Most things are bucketed into runtime errors, but it would be good to have more descriptive error messages.

MM: Are you anticipating that a trap as opposed to an exception rewinds everything to the last known state?

SH: Not sure.

JF: maybe not relevant to CG.

SH: With many decisions if they live in the outer layer, then the projects can change and adapt as we learn, rather than putting in the spec.

SH: Backward compatibility is something that comes up. There’s already a compiler from the existing machine to wasm. There’s also JULIA that helps here.

MB: evm2wasm, I wouldn’t trust it. It works 90% of the time, but it’s not ... when we talk about introducing new features, wasm replaces the EVM; there won’t be 100% backward compat.

SH: Parts of that are existing codebases and contracts being ported to wasm easily. Also the existing infrastructure. Only matters for ethereum.

SH: Some things that are helpful: multi value, reference types, annotations. This helps the wasm fragment talk to the external world.

AR: I’d add GC types too. (MM: yep) Since we’re talking about a message-passing model -- anything with linear memory is useless since you can’t pull it out and pass it to another module; you’d have to serialize/deserialize. If we had GC types then we would just pass around references, which would simplify things.

MM: The simple summary statement you can make is that existing wasm with ref types is 70% of what we need. Wasm GC is a full-on language capability machine

AR: At Dfinity we define an object as part of the API; you can emulate these managed types, but it’s not fully typed: many runtime checks. (LW: dynamically typed!) We can emulate that already, so we are 99% there, but it’s not particularly nice.

MM: The issue is: what is the unit of mutual protection? If you assume the adversary is in wasm bytecode and they’re running in the same compartment, you only get protection at the compartment level... if you don’t have linear memory then you’re great.

AR: I’m talking about ref types, so no linear memory, it’s just host function calls to create or access references

MH: Are you talking about running different wasm instances in the same address space? (MB: What do you mean by address space?) Two different instances of wasm code running in the same address space.

MB: Yes

MM: Spectre is an attack on confidentiality and secrecy, not integrity. On blockchains everything is publicly accessible, but for defensive integrity, wasm already provides full defensibility using separate linear memories

BT: You don’t have any timers in their computation model? If so then spectre is off the table.

MM: You can only read the things that the quorum reproduces.

AR: You have a replication factor of like 400

MM: If all of your miners have the same secret, you can spy on that "secret". You can have a timing service that enters a time value into the blockchain, there is no threat w/ the time that can be used.

JF: Let’s time the presentation

MB: If you’re trying to run things in parallel, you can use gas to schedule processes and trust that it’s deterministic.

SH: threads and GC are maybe problematic.

AR: GC only an issue if we have weak references

BN: how much do big performance differences matter. Are implementation choices for performance relevant?

SH: Matters a lot

AR: doesn’t matter at all for us, networking and cryptography dominate most cost.

MB: we have insane replicating factors

AR: You have to wait for the slowest guy.

???: security is the most important thing.

BN: For crypto calculations, performance matters. Are the contracts ever on the critical path?

AR: good comment, adding some nuance - depends on how we use wasm. One of the ideas for Dfinity was instead of having monolithic crypto instructions in the VM, we could implement them in wasm. in that case we would need performance, but in the case of smart contracts, not as much

VC: when you want to use opcode count or gas to measure ... if you measure poorly it can be a problem.

Ethereum was assuming that you could not optimize it fast; if you have a JIT, that can change.

JF: How can we have CG discussions that take your concerns into account? Details here are less helpful. How do we stay on top of non-web requirements? It’s a learning experience for us; we want to know what topics you’ll be most interested in. What kind of things do you think you will champion, things that the web might not care about?

AR: the biggest requirement for us is determinism, that might lead us to a subset of wasm that is easy to spec.

BN: Should we do that in our spec?

[discussion about determinism and threads]

AR: Wrt threads, also compartmentalization

AR: In terms of actual features, I’m not aware of much else that’s needed.

BN: In practice, is a lot of the hardware from Intel? Any thoughts on how to make those decisions wisely?

AR: I don’t know. One meta-point in terms of process: everything so far has been driven by the web platform. The web embedding has a privileged place in our process. You’re also supposed to provide the JS spec for core features. It’s not scalable for N embeddings. We may need to decouple these processes.

JF: We have a lot to think about. Maybe return to this at the next meeting.

[10 min break]

JS/Web API tests

DE: There are incomplete tests for JS and web. My coworker has taken this on, will write a full test suite. He has been reviewing the spec, and filing issues.

Where do we write this test suite? From previous discussions it sounds like web tests should go in WPT. On GH, someone tried to put the tests in the spec repo, but there’s not much reason to put them there -- they can’t run without a browser.

Where do we write the JS API tests? We have one file, checked into the spec repo. Each browser has a manual process to pull tests down periodically and get them running locally. They can run in just a JS shell. That’s how the browsers implement it, which is good. Easier to debug as well.

As we have more tests, we should preserve this property. Should we check in the tests w/ the spec, or at other times? With wasm and JS and other web platform parts, when there are barriers to upstreaming tests, devs write tests in one particular browser. The JS tests are incomplete, but each browser has its own complete tests. There are benefits to having common tests. We (Igalia and Mozilla) will write tests in the way that’s most appropriate. I think we should use 2-way sync, w/ a review process. There have been concerns about the tests being clean enough. The other option is checking in tests w/ the specification. Another way is like test262, where the test doesn’t go in the same commit but is cross-referenced; then you mark the test w/ a feature flag.

I think Luke and I were thinking that we should try to do 2-way sync w/ WPT, but make sure that the tests can run in JS shells. Make sure 2-way sync works well so that, after the partnership between Igalia and Mozilla, anybody can use it for new features and write tests using the same process.

AR: I have a meta-question, we had this discussion quite painfully previously in the past, why are we having this discussion again? What changed?

DE: We’re about to write a lot of tests, and since the last conversation I’ve looked at the tests and the coverage. Part of it might be that the wasm community is more disciplined; we should write the tests as we go. W.r.t. the JS API, we haven’t had tests from the spec writers; we should have tests (not talking about wasts). JS API tests would make sense to have in WPT with 2-way sync. Running on different browsers gives different results; we should have compatibility, as we’ve shipped in 4 browsers

LW: Also we’re talking about JS embedded tests, not wast.

JF: one of the issues in the past was where tests live, in the spec repo, WPT, individual locations w/ two-way sync.

BT: We took a poll on this and we agreed on 1-way sync; it was 6 neutral, 13 in favor. I thought that was non-controversial; 2-way sync was controversial. The step where we add this didn’t happen.

LW: We’ve talked to real people that work on WPT, had more discussions about coverage, things that don’t have to sync - PR submitted to the spec..

JF: The discussion we had: what’s the upside to checking in tests into a repo that’s not a wasm repo.

DE: We’ve seen this much before. Implementers don’t write tests where there’s a big barrier to entry. There can be manual pulls, but it would be nicer to have automatic pulls.

JF: We haven’t written JS tests in the spec repo, we have implementation divergence. Whether we want to fix it by putting tests in the WPT vs. the spec repo is the issue. We voted on putting the tests in the spec repo.

LW: That was discussed without much information, and that was some time ago

BT: Nobody did anything, now we’re talking about doing something and they want to do something different.

DE: Nobody stepped up to do this, so it’s kind of a sign that we didn’t make the right choice. It’s not good that we need a browser-specific harness.

LW: When new tests are written, no one needs to maintain a list of known failures, etc.

BT: That doesn’t happen for the wast tests. We’re relitigating a long discussion.

LW: It wasn’t a solid decision.

DE: We’re trying to solve the problem of friction of landing tests.

BT: Now we’re making this contingent on a new mechanism.

BT: We should have landed tests in the spec repo - we didn’t do it till now

LW: It would be more work for me to do this. (JF: 1-way sync would do this) but it wouldn’t, it’s more work for us.

JF: It doesn’t matter for WebKit, we don’t do 2-way sync. We agree that tests need to be written; whether they go in WPT or the spec repo, they still need to be written. Just because we don’t have tests ...

DE: The tests have been written, 4 times, just not upstreamed. WPT is meant to make it easy to upstream. There’s a culture difference between pretty vs. ugly tests, specific reviewers vs other browser reviewers. 2-way sync is the most liberal policy.

JF: This applies to either repo, the last time I checked, there were IDB tests that worked only on chrome - not that it means WPT isn’t working..

BN: There are some of them that fell into blink, some got removed.

JF: We have some existence proof that wpt doesn’t necessarily work here.

BN: For Chrome, since v8 is in another repo that isn’t a big win.

LW: If it needs to run in a browser, you need to put it there. It’s where you need to run it ultimately.

DE: We discussed the Web tests, there’s no contention in those living in WPT, we’re talking just about the JS API

LW: So are we just talking about JS API tests? The interesting thing about those is that there is different behavior between JS and web.

TS: Some interesting data that could change things -- different test harnesses that moved to 2way sync and how that has affected their work.

AR: Tests living in spec repo is crucial for wasm proposal process. Moving to WPT breaks it

BT: We discussed this to death, we came up with a solution everyone is fine with. Now we’re trying to bring it up again. We’re delaying this on something new...

LW: We’re not delaying things, we’re writing the tests, but we should write them in a way that’s simple, and easy to use

BT: Why do we need 2-way sync?

BN: because the fundamental philosophy is that it’s easier to put them in a shared place.

JF: I’m worried that we voted to put this in the spec repo, and we didn’t follow up on that.

BN: It hinged on novel infrastructure.

BT: It shouldn’t hinge on any infrastructure. Tests are important; we agreed the spec repo is the source of truth. 1-way sync is what we agreed to.

LW: They will eventually get to the spec, we would like to land them in WPT

AR: We decided to do it incrementally, start w/ a 1-way sync.

LW: We tried to do this before, it was very painful.

JF: We agreed to 1-way sync.

LW: It’s painful. We’re about to start writing them. It would be easier to land in mozilla central.

BT: It’s a GH repo.

DE: The actual developers who will be writing the tests prefer the tests to live in WPT

JF: You can check in stuff to mozilla central all you want.

BT: How do you deal with wasts?

LW: Manually, it’s annoying.

BN: Standard policy about WPT: there’s no structure, just find a buddy and land the tests

JF: One of the reasons we supported the spec repo as the source of truth: it has credibility.

LW: Still can be, the idea is that it gets there eventually.

BN: Would it be a tragedy that DE landed tests in WPT?

JF: It’s not harmful, we had a clear vote, we wanted to work in the spec repo, but that didn’t happen. I want to take an honest stab at that.

DE: I wrote a couple of tests, it was clear that we had a disagreement within the CG - that was the most we could get to at the time

JF: The vote is clear.

BT: There was no impact.

DE: We did check in tests at the time

JF: I have the poll in front of me, we voted for 1-way sync. BN said he would look into it.

BN: Reporting back - we don’t have resources to put into it right now

JF: The decision was to start w/ the spec tests.

DE: My hope was to start test development, but there was continual spec work. We’re in a good state to move forward - given what Brad is saying about writing tests, and then bringing them into the spec repository - this can be now following up

BT: the reason we wanted to have 1-way sync ... it solves automation problems.

DE: Lot of developer friction to land tests in the spec repo first

BN: Let me clarify. When we talked about this before, it was 2-way sync to v8. Now we’re talking about WPT, where it will break in Chrome, not v8.

BT: AFAIR, what is the source of truth? We wanted the spec repo to be the source of truth - I understand the argument about developer friction...

AR: I’m arguing that this is frustrating for when I’m writing a proposal to land in a different PR.

DE: Test262 lives outside JS - that’s not been a source of friction

AR: How is it difficult to make a PR to the spec repo.

EH: We currently have web api test PR that is in flight, can we move forward with this.

LW: We can treat the spec repo as source of truth, and then copy them to wpt, or vice versa..

AR: Before that there’s no review?

LW: Yep, just a speculative test.

JF: I don’t like that we had a clear vote, the action items slipped, you’re ready to try now, I want to have an honest try now.

BN: Dan is where we have resources, he’s not in the place to add infrastructure

Clarifying what two way sync means

DE: I don’t think anyone was proposing we’d do 2-way sync between WPT and spec repo. We could also have the tests checked into WPT, then manual uploading to spec test?

BT: This is where people felt strongly, that the spec repository is the source of truth - I thought you meant two-way sync to the spec repo

BN: we’re talking about the existing 2-way sync in place, from WPT to chromium and firefox.

BT: Confused, sorry - I feel strongly that the spec repo should be the source of truth for tests

LW: I was assuming the same, I like the idea that everything goes upstream to spec repo eventually. It would treat the spec repo as another browser.

DE: We did discuss that, I’m sorry. I don’t think that’s a necessary component. I want to make sure that the friction of developers to upstream tests is low. We have empirical evidence that this changes the # of tests that get uploaded.

Everyone is confused about what two way sync means

BT: I understand the argument about friction; it’s an incremental step. The ball is in your court with tests; landing them in the spec repo in one chunk makes sense to me. The one-way sync was the incremental change

BN: When you land, Dan, you’ll want to test against a browser. Is the concern that you’ll not get to use existing infrastructure if you have to use the spec repo.

DE: Annoying to switch repositories during development, easy to make mistakes - not intuitive to write tests in the current form

BT: That’s an argument to have a single source of truth.

DE: Different browsers have different ways to pull in Web API tests, how the wasts are run, distinct from WPT, adds more complexity.

BT: js-api tests are pulled in the same way the wast tests are run

DE: They’re run by a mjsunit test

BT: Run differently, but sync’d the same?

DE: Not sure.

BN: Are we exclusively talking about js-api tests or web tests?

LW: All should go up to the spec repo, js wast, web.

DE: I don’t think we should have web api tests in the spec repo, because there’s no practical way to test them - WPT already has the infrastructure to run tests in multiple browsers without additional work

TS: Will web platform tests be able to run ES module tests without issues.

DE: ES module test are different, test262 has special shell to test the JS side, not web integration side. Some of the engines correspond to the shell.

TS: For wasm we’ll need ES module tests, they shouldn’t use a different harness.

DE: They should be the same as the JS API tests.

JF: We’re not coming close to resolving this. Sounds like Mozilla and Dan want to go in one direction, and Brad doesn’t have infrastructure to set this up. I don’t like that we had a vote and didn’t follow the procedure. Sounds like you want to do something else, but there hasn’t been action yet; I’d be OK going with your direction because it seems like there will be action. OK to take a vote on this.

BT: There has to be a point where the syncing happens.

JF: Both solutions, wpt -> spec or spec -> wpt requires infrastructure that isn’t there.

DE: Fine with your plan, write in WPT - then clean up, and land in the spec repo as source of truth

AR: I fear that this will become de facto source of truth.

DS: we used the term sync to refer to WPT -> spec, w/ WPT as staging ground, with review steps. That’s not really a sync, it’s more manual.

JF: It’s automated - with imaginary infrastructure that Google will write, so that we can check which tests pass/fail - pull request creation only happens when some number of browsers pass the test

Discussion about test infra

AR: This is far more complicated.

JF: I don’t care about how much more complicated this is, I want to move forward. The main reason we had the vote was because we wanted the spec repo to be the source of truth. I sympathize w/ Mozilla and Igalia because we don’t have any other process.

BN: I thought at the time I could go back to other folks to put infrastructure in place - I was looking for dispensation to change the handling of tests. It was just an attempt at moving forward, contingent on work that happened - just clarifying the rationale

JF: That doesn’t help us unblock

BN: There’s no infrastructure; Dan will land in WPT and manually copy to spec.

JF: We’re still relying on manual, or infra doing it

BT: What do we want to achieve as the wasm CG? I think we want to have the spec repo as source of truth.

BN: Is that more important than having tests?

DE: Not convinced that the source of truth discussion is relevant. Source of truth is always going to be fuzzy - it depends on organization of test repositories - we have to handle it on a case by case basis

BN: if he writes a test that isn’t green in all browsers, he gets it for free.

DE: buggy tests get checked in, even in things like test262. Having WPT as staging ground is useful.

BT: Point of source of truth is that we have to have tests that pass in one place

BN: We don’t have bots in the spec repo.

BT: Do we agree that the spec repo is the source of truth or not? We should put all the tests that should comply in the source of truth, whether implementations comply or not. You’re talking about putting the source of truth somewhere else.

JF: You are making a good distinction between source of truth vs. source of green. There are no JS tests right now, Dan’s mandate is to catch us up to the place where we can see the discrepancies, find what is red, then try to resolve in the spec repo. He wants to find out where the failures are. Once we resolve the source of conflicts, then what we care about is the source of truth.

DE: I’m confused about "source of green", there are things that are different between browsers. You never check in tests that don’t match the spec, checking in tests helps in alignment. What are the disagreements that force us to differentiate between source of truth and not.

BT: I agree, which is why source of truth should contain red tests as well - we’re not going to write..

LC: I want to say that the conversation is happening between 5 people; maybe take that offline.

JF: Let’s try to resolve offline.

[10 minute break]

ESM integration

Slides—with speaker notes

Slides—without speaker notes

Issue w/ WebAssembly.Memory import

LC: Not sure what happens to undefined values in module graph

AR: They get converted (LW: which throws for memory)

[slides continue]

Issue w/ wasm-to-wasm cyclic import

JF: This is a TDZ issue, right?

LC: yes

JF: I thought we were disallowing direct wasm -> wasm, this example has wasm as part of cycle.

LC: Yes, I should update the picture.

JF: It should be ok if you marshal, unwrap and wrap.

LC: Should be ok if you call function later.

AR: In general, is it correct that it’s fine if there are cycles, and the problem is rather if you use the values?

LC: yes

LC: If the wasm modules are only importing things from the JS modules, should be ok.

MM: for normal functions, the variable is still assigned; in the case of an export function, am I exporting the function or the cell that’s holding the function?

LC: You’re exporting the reference, but you can change that. In JS this works, but not in wasm.

TS: One difficulty, is that in wasm you can’t change the import to a different signature or a different type.

MM: I agree, but is inconsistent between the two.

BN: Where a wasm module exports something wrapped in a host binding, can that also not participate in a cycle?

LC: Like a global? That would still not be able to participate, I think. Not sure. The issue is more about when things are initialized than about whether you’re pointing to the value or inner value.

MM: Let’s say that we expanded the ref types proposal - anything that’s imported/exported could be passed dynamically as a value - anything that you might have to hook up statically, you could hook up dynamically by back patching - does that make sense?

BT: You could use the global as a cell

AR: If everything is first class.

AR: You can already do that with reference types, because anyref can e.g. be a memory object, you just can’t use it as such in wasm. (MM: OK)

JF: What are the next steps? Steady progress on ... no objections to anything in the last 3 meetings.

LC: These tricky issues, I want to make sure that everybody’s on board for this design, then we can work on spec text. It will be complicated to work across each spec.

BN: If you have a wasm memory, exporting from a JS module, is there an opportunity for that to be shared cross-worker, before...

JF: Like instantiating a module w/ threads?

LC: You can’t actually share modules across workers

BN: Is there an opportune moment, if a module has a shared memory relationship w/ workers, ...

LC: That would be after startup, not sure..

JF: At that point you want 1st class threads. Trying to make that work is not as useful.

BN: Far more useful, yes.

JF: There’s a gap, I just think we don’t want to fix it yet.

TS: Maybe modules get top-level await, suspend the top level, do dynamic imports. That could be a place to do thread stuff. Without it, it would be hard.

JF: One thing I thought of. If we have dedicated worklets, you could have a cycle of worklets, one of them is wasm, forking then is just forking the worklet which is like adding a cycle. Does that make sense?

AR: We don’t want to overload the module mechanism.

BN: The consideration is that: we shouldn’t plan a way to do it, but somebody might try to make this happen.

JF: The start function, eval get to run code - you don’t get to do anything before that

AR: Is it still the case that the one change we have to make to wasm spec is to split instantiation into two parts.

LC: We will need to add a parsed module method - we need to return a module record interface - at least enough for the html spec to use it. It needs to be able to call instantiate on that module record

BT: That’s roughly the imports of a WebAssembly.Module.

LC: It’s not quite- we just need an interface - three methods and the property - parse module..

AR: What are they again?

LC: ParseModule, ...

LW: Andreas is talking about the core spec. ParseModule could be implemented with DecodeModule and additional JS goop.

MM: The module method was described as containing a url - I want to verify that it’s up to the loader to verify that the name is interpreted correctly, and that the module record doesn’t contain some resolution of inputs to that module record

LC: Yes, the module specifier depends on the host. The module record has the requested module field, the modules that that module depends on, and the AST and other stuff...

MM: It doesn’t contain any information that is host dependent

LC: No, I don’t think...

JF: It contains the source, right?

??: It only has the specifier

LC: The specifier doesn’t have to be a url. I just used that as a shorthand for specifier. The url is just what the html spec needs

MM: The module record you’re proposing will follow those conventions.

LC: We just need what the html spec uses.

MM: I want it to be independent of html.

LC: There are currently no other specs for how to load modules, I’m working with node modules WG to keep them moving along w/ us, so we can make sure that node use-cases and others...

MM: The other effort that we need to coordinate, the realms effort- which is trying to enable user code to specify custom loaders

LC: That interfaces w/ things outside the domain of this WG... (MM: I agree)

LC: Coordinates with the WHATWG loader spec

MM, LC to coordinate offline.

LC: Two weeks to review, to vet as a design before writing spec, especially across three different specs

BN: Is there an unusual hazard here?

LC: I’m just worried that working across 3 standards bodies is complicated, so we don’t change our mind midway.

JF: Maybe we should do a joint meeting across specs - if that’s helpful at all

SIMD

Presentation by Richard Winterton from Intel

RW: Investigating what types of instructions are being used with the internal benchmarks at Intel

Noticed that there is a multiply i8x16 instruction; it doesn’t make sense to him because of overflow

Was implementing an instruction in Tensorflow and using 8-bit to 16-bit zero extension, and this approach avoids overflow. Decided that this makes sense for WebAssembly.

Proposed to use 8x8 extending loads, which results in 6 new instructions that take 8 contiguous byte values in memory and sign- or zero-extend them to 16 bits (i16x8). Both Intel and ARM have these instructions.

VB: Also Arm has some additional instructions to widen d0 into q0 to double the size of the input.

RW: Without these instructions, essentially 50% of the multiplies are going to overflow. This is important for machine learning and graphics. Peter looked at 8x8 multiplies, and someone did request this (stackoverflow question), but never found an application for 8x8 multiplies. In Rich’s opinion, recommend removing the 8x8 multiplies.
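The semantics of the proposed load-and-extend operations can be sketched as follows. This is a minimal model, not an implementation; the function names here are placeholders, since the actual instruction naming was left to the naming discussion.

```python
# Sketch of the proposed 8x8 extending-load semantics: each op reads
# 8 contiguous bytes from memory and widens them to eight 16-bit lanes
# (i16x8), so a following 16-bit multiply cannot overflow 8-bit inputs.

def load8x8_u(memory: bytes, addr: int) -> list:
    """Zero-extend 8 bytes at addr to eight 16-bit lanes."""
    # Unsigned bytes (0..255) already fit in 16 bits unchanged.
    return [memory[addr + i] for i in range(8)]

def load8x8_s(memory: bytes, addr: int) -> list:
    """Sign-extend 8 bytes at addr to eight 16-bit lanes."""
    # Bytes >= 128 represent negative two's-complement values.
    return [b - 256 if b >= 128 else b for b in memory[addr:addr + 8]]

mem = bytes([0, 1, 127, 128, 200, 255, 2, 3])
print(load8x8_u(mem, 0))  # [0, 1, 127, 128, 200, 255, 2, 3]
print(load8x8_s(mem, 0))  # [0, 1, 127, -128, -56, -1, 2, 3]
```

With lanes widened like this, an i16x8 multiply of two 8-bit inputs is exact (the product of two bytes always fits in 16 bits), which is the overflow problem the proposal addresses.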

JF: Was this instruction because of a codec?

RW: Checked with James and James would prefer the new byte extension instructions

WV: For machine learning and graphics this represents some kind of normalized value. Do you want an 8-bit value that is shifted back down?

RW: Yes, you typically do want to pack it down.

JF: We do have basic shuffles.

RW: We have shifts and shuffles and extracts, but expensive.

BN: What about multiply by some 15 bit constant?

BN: imagining Mul-high but with constraints

RW: Not going to worry about mul-highs right now.

BN: James said mul-high was pretty pervasive.

RW: We don’t have that right now, but with these new instructions, multiplies become useful.

JF: This is just 8-bit multiply?

RW: Yes, but the same is true for 16-bit, and these instructions are already valuable. The 8-bit one is not valuable IMO (i8x16.mul should be removed). Recommend moving it out and instead using these 6 instructions.

JF: The people who had proposed this aren’t here right now.

DG: I think this makes sense. As long as it makes sense for the arm folks. Not sure exactly how to do non-standard types for instructions. As long as it is not a regression for them.

RW, MH: Just the naming convention needs to be fixed.

R: What makes it a load and not an instruction that operates on memory?

RW: Doesn’t match any of the ISAs currently.

JF: Can use the regular load and multiplies.

R: Would still load 8 bytes.

RK: Is there no value in expanding in registers?

RW: You can do this with a shuffle.

RK: This is a regular load followed by a shuffle.

RW: With sign extension. Yes.

RW: Why do that when there is a single instruction on both architectures?

JF: It’s a matter of consistency. Just want to see what is the most consistent thing. Tried to avoid 16-bit things except when there are direct operations. We don’t store 16-bit things, but loads do sign/zero extensions.

JF: I agree that is trivial for a compiler. As an implementor, no problem having 64-bit loads.

PJ: If we have 64-bit loads, why not 64-bit stores?

VB: You have 64-bit loads for I64, but not for i8x8. When you do a 64-bit load, it’s into a normal register, not a SIMD register.

JF: But in WASM we have an infinite number of registers.

DG: I think it does matter if we call this a SIMD value or not. It’s an important hint to the compiler that something is a SIMD value. That’s why we have two different things, for register allocation.

JF: I wouldn’t outright reject what Rodrigo said. We are doing this for consistency. You have a good argument.

RW: Proposal?

RK: What do you favor - a reduced number of instructions, or consistency?

RK: Compiler can easily reduce a load followed by a shuffle to just a load.

RW: Does the compiler always know?

RK: Can never really know what the implementation will do.

RW: That’s the problem. For this we have to define the type of a load.

PJ: You could do the load-shuffle for the sign-extension.

RW: We have to know whether to sign extend or to zero extend.

JF: This is not a perf issue.

RW: Correct, this is not a perf instruction. The only way to implement this was to do the zero extends or sign extends. Otherwise one has to do 4-5 instructions just to get ready for a multiply.

RW: Could not find a current implementation of anything that would be happy with a i8x16 multiply.

JF: What you propose looks very consistent with other things we have.

RW: Yes, with one exception. We are showing up zero-extension and sign-extension.

MH: No yes.

RW: Oh yes.

MH: Would suggest this go to bikeshedding committee for naming.

DG: Agree that the 8-bit multiply isn’t that useful. Want to research it though. Auto-vectorizing compilers might find it useful.

JF: Ok to table until Dan does research?

JF: Punted crash reporting to discussion.

BN: A more extended GC discussion may be useful.

Action Item: Dan Gohman to do research into the usefulness of 8-bit multiply.

Garbage collection

Presentation by Andreas Rossberg

Slides

AR: More of a high-level update. Discussion grounds for how to attack the problem. Progress since last time: split reference types into its own proposal. More incremental path forward. Want to agree on a scope for "GC MVP" and figure out strategy for JS interop, mentioned by Till. Possible roadmap:

  • Reference types

  • Typed function references, downcasts

  • Type import and export

  • Basic GC types

  • More complex GC types

  • Weak refs, finalizers

MM: Typed import and export gives us opaque names?

AR: Yes.

AR: Probably want to introduce it before GC types because it gets more complicated later.

BN: If the embedder provides some of the type information, may allow optimization.

AR: Not sure about optimization

AR: Probably need to introduce JS API for types.

AR: Weak refs and finalizers might also need some stages.

JF: Usually when we do MVP, we have a target for whom each thing is useful. What users and toolchains would use each of these? Not minimally viable if nothing uses it.

LW: Slide on targets of each thing?

AR: No.

AR: Been thinking about it as more of a natural technical layering, not about use cases specifically for each layer.

AR: Function references is a natural extension, but also useful e.g. for Dfinity to have typed function references

JF: Would typed file handles be a use case?

BN: That gives the ability to inline bindings to the outside.

LW: Reason.ml is used in facebook messenger. Could be a good use case.

JF: Useful to know when/if we succeeded, and who we should be doing outreach to.

LW: Are trying to find initial languages.

JF: Was talking to (a gc expert). Worried about the roadmap for GC: a two year block where nothing happens, and then something comes out or not. Want to be able to show a concrete roadmap. Success criteria.

AR: That’s why we want incremental stages. Usually expect technical and usage to go together, but want to split up into things that are implementable.

BN: Crucial to show a win at each stage.

JF: We reached out to specific academic people. E.g. Andy Wingo.

AR: They are all asking. Kind of a chicken and egg problem.

JF: We can concentrate on some industry sources

BT: Originally the silence was deafening, once we shipped MVP everyone came forward. For this one I think we need to do specific outreach.

LW: Facebook will go nuts for this, but they may not be able to help before then. Trying to poke them pretty hard.

RK: Definitely want to start looking.

LW: Might not be the most efficient thing but could be viable.

RK: As long as we have the anyref, then we can handle things like generics.

AR: It should be possible to implement everything in a possibly suboptimal implementation, e.g. might need indirections for aggregates. The next step adds more efficient ways to do some things (more memory-efficient, fewer casts).

JF: Specifically from .NET, the impression I got was that everyone was OK not respecting the letter of the spec for finalizers, while the other VM was not necessarily OK with that?

MM: Is there anyone that can speak to the Microsoft side of .NET?

MM: Mono is at Microsoft?

All: Yes.

JF: Get the impression that this doesn’t work for classic .NET.

WM: This doesn’t work for what we are trying to do at Adobe. This is really about adding a new set of managed types, and with garbage collector. This does not help implementers who are trying to do their own GC.

BT: Trying to use the term managed data, but not catching on.

WM: This is orthogonal to what we are trying to do. Not looking for a set of ready-made GC that already have a definite set of semantics.

BN: Because of the lifecycle?

WM: It’s an old crufty codebase. Unspecified and accidental semantics that will probably never get the guarantees that we make.

AR: That it is kind of an orthogonal thing.

WM: Kind of get that this is kind of a misnomer.

MM: There is a bifurcation. If you allocate out of the WASM memory, then you have to manage it.

WM: These are typesafe.

MM: Typesafe and bundle in a garbage collection mechanism. If your finalizers cannot be emulated, that forces you to do everything yourself in the WASM memory.

WM: This mechanism is implemented using garbage collection, but this is not really about garbage collection.

AR: The GC MVP is the most interesting step.

  • Plain struct and array types

  • Mutable and immutable fields

  • Instructions for allocation and access

  • Checked downcasts (no generics)

  • No nesting, require indirection for now...avoids need for inner pointers

    • Can’t do the full C-level flattening of structs into arrays
  • Tagged integers? (feature completeness in AR’s opinion)

BN: settled on integers and not doubles?

AR: Tagging pointers is what most runtimes do. Kind of want to provide this.
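The pointer-tagging technique AR refers to can be sketched as below. This is a generic runtime trick, not the proposal's concrete encoding: the low bit of a word distinguishes a small integer (tag bit set) from an aligned heap reference (tag bit clear), so small integers never need a heap box.

```python
# Minimal sketch of low-bit integer tagging, as commonly used by managed
# runtimes. Heap pointers are assumed word-aligned, so their low bit is 0;
# a small integer is stored shifted left with the low bit set to 1.

def tag_int(n: int) -> int:
    """Encode a small integer as a tagged word."""
    return (n << 1) | 1

def is_int(word: int) -> bool:
    """A word with its low bit set holds an integer, not a reference."""
    return word & 1 == 1

def untag_int(word: int) -> int:
    """Recover the integer via an arithmetic right shift."""
    assert is_int(word)
    return word >> 1

heap_ref = 0x1000              # aligned object addresses are even
assert not is_int(heap_ref)    # so they are never mistaken for integers
assert untag_int(tag_int(21)) == 21
```

The observation in the discussion holds in this model too: whether a given integer happens to be tagged or boxed is not observable to the program, only to the implementation.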

MM: Is it observable that integers are not always boxed?

AR: No.

BT: NaN-boxing is mostly JavaScript.

JF: Except luajit.

AR: Take the temperature of the room.

LW: JS reflection for when these flow out to JS.

RK: Without introspection from the hosting API, we can’t move forward.

MM: I love it.

BS: Is this similar to what you presented a year ago?

AR: Yes, but there used to be a longer list. Split off reference types, trying to narrow down GC MVP.

TS: This is all with structural types.

AR: Yes. But that is somewhat orthogonal to the feature set.

JF: Can you take an action item for success criteria? Not being a GC expert, not sure what this will be useful for. Not clear what the limitations imply.

AR: They may perform non-optimally. Can probably implement most statically typed languages minus weak refs. Might have some overhead for more indirections or more casts.

AR: Post-MVP

  • Nested structs and arrays, inner references

  • C data model, but distinguishing inner pointers

  • Dynamically-sized structs

  • Header fields or meta objects?

  • Means for eliminating more casts?

  • Abstract and nominal types?

  • Closures?

  • Threading with sharable references

  • Weak refs & finalisation

MM: What about sum types?

AR: Good question. Talked about ways to introduce tagging. The most general form would be further out. Can emulate with enough casting. Not essential for semantics. A high-performance functional language might need them. An alternative would be header fields or meta-objects.

LW: What about sums?

AR: Variants give you a bit more structure to do that.

MM: Radical hypothesis: all of the top 20 languages targeting WASM either want to do their own GC, or don’t need shared memory multi-threading. If there were two camps, would propose that the only mechanism that supports shared memory is the WASM memory.

BT: There are ways of subsetting the JVM that don’t include finalization.

BN: .NET standard includes a minimum subset but doesn’t include some things, right?

RK: Could explore what are the things that we need to have the same quality of interop with the host. Would probably look as bad.

WV: For languages that don’t want to use the GC, would we consider to think about facilities for better GC inside the linear memory.

AR: The only thing you need is a safepoint mechanism.

WV: What about stack walking?

AR: C deals with it by putting in memory.

JF: It’s an optimization to do this, not semantically required.

WV: Is this not a valuable thing we can offer the embedder?

LW: Would be interesting to see how far these progress past prototypes. E.g. for threads you can just poll.

WM: Curious about shareable references. JS engines have historically been single-threaded. Do engine implementers want to get into that business? Is there going to be pushback?

MM: At TC39, a proposal to include fine-grained shared-memory, many people would declare "over my dead body".

AR: Would be possible to type and isolate shareable references to not escape to JavaScript.

WM: Concurrent access to JavaScript heap puts new constraints that these implementations have not had to face yet. Was easy to introduce isolates, but easier to not design for concurrent access.

AR: We have the pressure, so we can’t rule it out from the get-go.

AR: JS-API

  • How should managed values be exposed to JS?

  • ..not at all, proxify values

  • ..through the WASM JS API

  • ..reviving the Typed Object proposal

  • Avoid path dependency on TC39 for GC

MM: We should first ask what is right for the whole platform. Only if we find there is a problem doing what we think is technically right. Would say "unduly blocking". If the right forward was to have a path dependency, then we should take it.

AR: Once you buy into the path dependency, then it exists forever. Have to depend on TC39.

MM: We are already facing that with SABs.

BT: WebAssembly is part of the web platform, but also not. We explicitly separated the two for independent evolution.

MM: The typed object proposal seems very nice for programmers. Can’t give up on it yet.

AR: We should have a contingency plan, it should be fine to progress with the Wasm proposal itself, as long as this is something we might decide to do that’s ok

TS: Strongly in favor of the typed object proposal. Gives some history of previous proposals. A large part of the good uses cases would be around cross-language integration, so the path dependency inherently exists. Aware of the non-web embeddings, but just ignoring JavaScript is really not an option IMO.

BN: Is there a way that key TC39 stakeholders can be engaged?

TS: Want to get more agreement on the core semantics. Would like to give an overview presentation. Not asking for significant advancement in TC39. Not a currently staged proposal (was grandfathered in). Next meeting is in 4 or 5 weeks. Afterwards would like to have a good overview of how hard this would be.

JF: There has been concern from TC39 that WebAssembly is trying to do things behind the scenes. Value in removing the potential for misunderstandings.

LW: Could we make a subcommittee?

AR: Process should not cause feature creep from JavaScript into what we want to do.

TS: Don’t think that is a concern. Trying to remove some features not necessary for MVP, such as getting at the underlying array buffer. I think the types are the most…(important?)

AR: We might want to introduce type structure that might not make sense in JavaScript.

TS: If we can find semantics that is useful on both sides, then don’t need a JS API. Should try to find something that works for both.

AR: ETAs: reference types: now, function references: ASAP, GC MVP: this/next year

LW: GC has a lot more unknowns. Is there a reason to bump them?

AR: One reason is that function refs are an easy case of typed refs.

MM: The WASM mechanism is designed as defensive sandboxes. Two module instances that don’t share memory or tables can only interact through imports. However, indirect calls between modules require sharing a table (same index). Import and exports of functions across modules is inconsistent without typed function references.

AR: Implementation should be simpler.

LW: Imagining what the optimal thing would look like.

AR: With anyfunc you already have that. It’s just adding a type refinement.

LW: Could use different representation

AR: Coercive subtyping, a world of a pain forever.

LW: I am thinking downcast.

AR: As soon as you have depth subtyping, then subtyping across structures, then you’d need to copy structure...bad.

LW: We had demos and and browser preview. Do we need those steps, aim for these?

BN: We need the moral equivalent of Emscripten.

RK: The production .NET is more constrained. Strict expectation of being able to run closer to spec. For Mono, it’s OK to skip weak refs or finalization.

LW: How long would it take?

RK: It would take a while, because of how the runtime is set up. Need to change clang or LLVM. Need that to get beyond a small demo. Even if we just hack around, on the order of months to get it.

DS: GC support in C?

RK: Runtime is thousands of lines of C.

DS: Every C pointer has to be visible. Need real reference types in C code.

BN: The GC extensions to bitcode. Who consumes them?

DS: Azul. But they are safepoints to do your own GC.

RK: We had a student do a managed C++ extension on top of clang. Might be able to resurrect that.

AR: Dart is interested.

JF: Slava?

AR: Hated WASM until he saw the GC proposal. :) For Dfinity, we will build a simple language that could target this.

Action item: Andreas to list success criteria / target users for each step of the split GC proposal.

Thursday

Testing Returns

JF: strongest opinions were Ben and Luke. You each get two minutes to restate your case.

BT: high order bit - spec repo should be source of truth. Blessed not in the sense that they work everywhere but that they represent the state of the spec. From previous discussion I thought tests would start in spec repo and then be copied over to WPT. My understanding of Dan’s point is that de facto source of truth would not be spec until some later time.

LW: I like the idea of all tests making their way into the spec repo with some regular, not automated cadence. We should make a distinction between initial tests and regression tests for weird corner cases. WPT is an easy place to accumulate these regression tests. Eventually these would make it to the spec repo.

JF: You want to distinguish between high level functionality tests and weird corner cases.

LW: Spec repo has initial tests that came with proposal. WPT collects more detailed and complete tests over time.

LW: Nothing rules out writing additional tests into the spec repo as a PR, but WPT gives another route.

BN: Where do regression tests belong?

LW: if regression is found after the fact, start in browser.

BT: Three way sync…

LW: two two way syncs

BN: how pretty do they have to be?

BN: where would a test from clusterfuzz go?

AR: We want to distinguish between implementation bugs.

BN: Some of the fuzzer tests are ugly.

LW: Maybe we don’t want those. We should try to distinguish the essential behavior of these tests.

AR: Who would be responsible for prettifying and moving over tests from WPT?

BN: why prettify these?

BT: I had an idea for a shared regression test repo

JF: So wasm-spec-tests and wasm-regression-tests? High prettiness bar for wasm-spec-tests. Wasm-regression is anything goes.

JF: Let’s ignore the spec for the next minute. Are we all okay with wasm-regression being anything goes, review process is pretty lax, but leave out flaky tests. Can we agree on this?

BT: yes.

JF: So we can just shove everything there. It doesn’t really matter. Do we have unanimous consent? We agreed on something?

JF: So the only thing that remains is the spec tests. It sounds like we want these pretty well organized. It sounds like you do want less friction, just bang them out. And then have one big review. Am I characterizing this right?

LW: We should expect tests written so that you can read them later.

LW: we’d be happy to involve people with an early review, but usually people aren’t eager.

JF: It sounds like you're willing to make this work for everything.

AR: The spec test repo should be structured in a reasonable manner, not random chaos.

BS: The current core tests are decent, but not pretty. It’s not a high bar to check in there.

AR: We should have high-level structure, one test per language construct - it’s not always clear, some things are cross-cutting

LW: Defaulting to adding lots of small files seems nice. Less merge conflicts.

AR: Let’s avoid that. We’ll have a zoo of files.

JF: We don’t need to figure out exactly how to break stuff up. Luke sound willing to have that discussion. Would you be okay with reviewing that?

AR: The meta concern is making sure people who commit to WPT do the work of carrying them over. Otherwise, WPT will become the de facto source of truth.

BN: Who is an appropriate reviewer? Should you and ben be involved in every review?

AR: I’m not particularly interested in reviewing these.

BT: It’s not a code review. It’s a lightweight review to make sure the test makes sense.

LW: A lot of times I just run the tests if it’s a thousand pages of micro details.

BT: Yes, not a code review. Eyeball does it match spec and cover the area.

AR: Yes, and test organization

BN: How would you think of a test that’s red on WPT in V8? Would that make you suspicious?

BT: not necessarily, knowing our implementation.

BT: The spec test should not be incorrect. There's a higher bar

AR: most of the JS api tests revolve around JS trivialities. Very little interesting functionality to test here: growing memories, instantiation (LW: mutating tables). Rest belongs to core suite

BN: What about tests that cross boundary?

LW: that’s where we need lots of tests.

BT: We’d really like to upload our mjsunit tests

JF: It sounds like we can agree on review. We want people tangentially involved from each browser

LW: yeah. The first few tests will establish precedent. Then periodic merges which we’ll make sure happen.

AR: how do we make sure that happens?

BS: Make it a part of the contract - also put it in the spec repo

AR: by that token you could ask people to put them there right away

LW: They’ll be writing some, we’ll be writing some

AR: general review that people write tests are responsible for merging over

BN: worried about short term or long terms?

AR: no, I trust these guys. But what about other random people?

LW: it should be possible to have a trivial dashboard that compares directories in WPT and spec repo
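The "trivial dashboard" LW describes could be sketched as a script that diffs the relevant test directories. This is only an illustration under assumed layout: the directory paths and the `.js` filter are hypothetical placeholders, not the actual WPT or spec repo structure.

```python
# Sketch of a dashboard check comparing a WPT test directory against the
# spec repo's, to spot tests that haven't been merged over (or that have
# diverged after being copied). Paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Content hash, so renamed-but-identical files still match."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def compare_test_dirs(wpt_dir: Path, spec_dir: Path) -> dict:
    wpt = {p.name: digest(p) for p in wpt_dir.glob("*.js")}
    spec = {p.name: digest(p) for p in spec_dir.glob("*.js")}
    return {
        # Landed in WPT but never carried over to the spec repo:
        "only_in_wpt": sorted(wpt.keys() - spec.keys()),
        # Present in the spec repo but missing from WPT:
        "only_in_spec": sorted(spec.keys() - wpt.keys()),
        # Copied over at some point, but the contents no longer match:
        "diverged": sorted(n for n in wpt.keys() & spec.keys()
                           if wpt[n] != spec[n]),
    }
```

A report like this makes the manual sync auditable: an empty `only_in_wpt` and `diverged` would mean the spec repo is up to date with WPT.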

BS: so there’s a second step that copies back to WPT?

LW: sounds like there are two directories. Regressions don’t get synced and spec tests do

AR: Are you hypothesizing about some automation that we don’t have?

JF: The automation is manual. Some person is doing it. If you pinky swear you’ll do it...

BT: I was under the impression this was technically possible.

BN: 2-way sync into Chromium has precedent; we don’t have it into V8; 2-way sync into the spec repo is novel

JF: Let’s ignore automation. History shows it doesn’t happen. Let’s assume it won’t. That human is Dan or someone from Igalia. Are we comfortable with that?

BT: I can see that happening practically. For regressions, anything goes; we want to reduce friction there. But why would we allow bona fide spec tests in WPT? Why not land them in the spec repo?

JF: Ignore WPT, it doesn’t exist - a test comes to you - do you care if it comes from WPT as long as the test is there?

AR: You might get fewer tests in the spec repo

JF: Assume good intent

AR: Will that be true for every contributor? Will they be incentivized to dump tests in the regression directory because that’s easier?

BN: They probably will. They may or may not care to land the test in a way that matches the style. If they’re encouraged to stick it into the regression directory, let’s make it easier to write tests

BS: Can we throw money at this problem? We have contracts. Let’s pay people.

BT: We are paying people. They don’t want to do that.

JF: What’s the scale of the contract for test writing? The whole web spec and js api?

LW: Yes

JF: If a rando contributor comes in and says Igalia missed this spec corner case. Does that ever happen?

AR: That has happened in the spec repo; I would expect it to happen for the JS API as well

BN: How likely is it that when they first show up, they don’t show up with something they found in practice? A lot of it will effectively be regression tests.

AR: I can only extrapolate from what I saw in the core tests. Most people seemed to be VM/compiler implementers who found missing cases

BN: If Dan sorts all this out, and we compile regression tests - if a bunch of stuff shows up in the regression directory, that’s a bad thing - why should that happen?

LW: We will always have regressions

JF: three minutes left

BT: thank you

LW: PRs happen to spec repo with assumption that steady state is in sync?

BN: can we have a scary readme that says you have to propagate the tests?

LW: yes, encourage the right thing

Wouter: people are motivated by their name. Maybe having their name on the PR would help

AR: They also have to sign up for the CG etc.

BS: Yeah, that might be turning people away

BN: I understand the concern. WPT is sort of viral. There’s a danger that we get sucked into that

JF: one minute left

AR: in all practical terms this will make WPT the source of truth, no matter what we say here. Because that’s what people go to to see what tests are there and where to add them

BT: worries me too but willing to roll dice to make forward progress

LW: core spec tests is still in core.

JF: let’s get forward progress. I understand you [AR] are grumpy. Let’s officially note that

AR: Yea, it’s just JavaScript

JF: Let the notes reflect that pinky swearing happened.

Pinky swearing happened.

Wasm JS Interface

slides

MM: One pervasive issue that i want to caution this group about is that by using WebIDL as a spec language leads to bad JSisms. I don’t know how much of this is because of WebIDL.

AR: None. We didn’t use it at first.

MM: Okay, I’ll ignore webidl as the cause and focus on poor ergonomics from programmer perspective.

MM: The instantiate method is overloaded on parameter type for understandable reasons. Trying to be liberal in what it accepts for the first parameter. Also instantiateStreaming. Notice the output signature for the second instantiate overload is different. This unnecessarily propagates uncertainty about types. In p = instantiate(x), if you don’t know x you don’t know the type of p. You should always narrow uncertainty and not propagate it.

AR: For the record, we discussed this, I argued against this. I don’t remember how we got this.

LW: It was a last minute scramble. I tried to change it too. We all hate it.

MM: If it’s too late to fix then that’s okay. I’m used to this in JavaScript.

LW: on the bright side, the recommended instantiateStreaming does not have this issue.

MM: The irony with instantiateStreaming is that it has the same signature as the first instantiate - similar in content. Having those two overloads would have been fine

BN: we talked about this too.

AR: one reason we did this is no one suggested a name we all liked.

BN: We were worried the streaming api wouldn’t look definitive.

JF: Streaming API came very late in the process

MM: But this one came early.

JF: Earlier than streaming

MM: I don’t know the history and I accept that there’s no way to fix a lot of this. I want to raise this so we have a better sense of ergonomics for an API for a dynamically typed language and pay more attention to this.

JF: We could fix with ugly valueOf thing, should we?

MM: I don’t know

BS: let’s let him move on

MM: If we really wanted overloading we could have made the output signature include a promise for the module even if it’s coming in. It doesn’t hurt and it would have resulted in not propagating type uncertainty.

BT: like a monad!

MM: I’m not getting into that

MM: another thing that’s natural here is that in some sense the byte form is doing a compile behind the scenes. It would have been natural to have instantiate accept a promise for a module since all of the other methods return promises anyway

JF: what about a promise of a module and instance, like the return values?

MM: I don’t know, I haven’t thought about this.

BT: ...arbitrary indirections

MM: instantiateStreaming could have been an overload

Wouter: What about no overloading?

AR: We had different names but everyone found them ugly.

MM: naming is hard

MM: should pass alpha renaming style test. Independent of names it should be a good api
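The type-uncertainty point can be seen in a short sketch. This assumes an engine with the WebAssembly JS API (e.g. Node.js); the eight bytes below are the smallest valid (empty) module:

```javascript
// WebAssembly.instantiate is overloaded: given raw bytes it resolves to a
// {module, instance} pair; given an already-compiled Module it resolves to
// just an Instance. The result shape depends on the argument type.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

async function demo() {
  // Overload 1: BufferSource -> Promise<{module, instance}>
  const pair = await WebAssembly.instantiate(bytes);
  // Overload 2: Module -> Promise<Instance>
  const instance = await WebAssembly.instantiate(pair.module);
  return [pair.instance instanceof WebAssembly.Instance,
          instance instanceof WebAssembly.Instance];
}
```

So in `p = instantiate(x)`, the shape `p` resolves to is only known if the type of `x` is known, which is the uncertainty-propagation point being made.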

MM: module constructor is weird. Existence says we are willing to synchronously compile. The point I thought was that everything could be done in parallel.

AR: Sync compilation is essential for JITting on top of wasm, for example

BT: also asm.js compat

MM: so if sync compile is essential, did we intend to make this split and align it with WebAssembly.compile vs. new Module? That seems like a strange distinction

AR: Constructor is always sync. What else could it be?

MM: given that one is used to looking here [slides] for async compile…. i don’t know. If you provide the constructor then I agree that constructor should be synchronous. These other apis are factory methods. Maybe we should just use factory methods everywhere with similar but non-overlapping functionality.
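The sync/async split under discussion, as a minimal sketch (same empty-module bytes; assumes the WebAssembly JS API):

```javascript
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Constructor: compiles synchronously, needed e.g. by JITs emitting wasm.
const mod = new WebAssembly.Module(bytes);

// Factory method: same result as a promise, so the engine may compile
// off the main thread.
const modPromise = WebAssembly.compile(bytes);
```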

MM: so… this is also very weird. The Module object methods are instead static and take a module as the first argument. From OOP perspective, this is bizarre

AR: You can blame me

MM: Andreas, I blame you!

AR: motivation is this is reflection, mirror-style

JF: what you’re saying is you don’t like a lot of the design?

MM: I’m raising issues I see.

AR: It’s following precedent object reflection

confused looks

JF: carry on

MM: class Instance is very straightforward. I like it.

JF: It’s synchronous though

laughter

MM: to what degree is it still an engineering criteria that we want to delay compilation?

JF: Some people do it at runtime

MM: for wasm namespace api is there still a need given an actual module for instantiate to be exposed?

BT: we do a non-insignificant amount of work at instantiation time, so it makes sense to be async

MM: one thing strange about semantics, not just interface, is that two level lookup happens for all imports. That’s reasonable, but there’s a cost and also observability. Second request for import could result in a different value. Instead I’d recommend each lookup happens once. Reuse top level records that have already been looked up.

JF: We could change this if people think it’s better. I could go either way.

AR: I don’t think it’s better.

BT: It’d be a minor performance win

MM: I’m neutral, raising it as an observation

JF: I think the reason we did it the way we did is imports are declared linearly. In my impl then we just go ‘for one thing, for the other’. Doesn’t matter, it’s just the straightforward implementation.

AR: Performance cost is irrelevant

BT: max 50% improvement because you save one lookup per import

AR: This is the instantiation operation, much more costly anyway

MM: No one really advocating for change here, so ok

JF: it’d be a behavior change, but not a big deal
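The observability MM mentions can be demonstrated: the JS API does a property get on the import object for each import, so a getter on the namespace runs once per import, not once per namespace. A sketch, with module bytes hand-assembled to import two i32 globals from a namespace `e` (names chosen for illustration):

```javascript
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // wasm header
  0x02, 0x0f, 0x02,                               // import section, 2 entries
  0x01, 0x65, 0x01, 0x61, 0x03, 0x7f, 0x00,       // "e"."a": immutable i32 global
  0x01, 0x65, 0x01, 0x62, 0x03, 0x7f, 0x00,       // "e"."b": immutable i32 global
]);

let lookups = 0;
const importObject = {
  // A getter makes the repeated top-level lookup observable.
  get e() { lookups++; return { a: 1, b: 2 }; },
};
new WebAssembly.Instance(new WebAssembly.Module(bytes), importObject);
// lookups is now 2: the "e" namespace was fetched once per import.
```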

MM: class Memory is straightforward, but this is a good moment to talk about other aspects of how the prose spec is phrased. Our spec sits between three different styles of specification. This is necessary, but we have to make compromises. Are we making the right compromise? The three spec styles are the wasm core spec, W3C spec conventions, and the ECMAScript spec. Two main pain points: treatment of mutable state, and terminology and distinctions. The wasm core spec is very different. It’s a store-passing style, as if writing an interpreter in a purely functional language. The Wasm JS spec confusingly tries to accommodate this, but doesn’t take all synchronous state as part of synchrony and flatten it into a global store. The only reason for a global store in store-passing style is to show when the store changes. With an imperative spec in the metalanguage, having both styles exist is bizarre. This has led to several logic bugs, which Dan fixed, that came from standing between these two worlds: sample the store, modify it, pass it and set it back, and then overwrite the modified store.

AR: Easy fix would be to have a wrapper around the function that sets the store..

MM: I think that would be wonderful

AR: I don’t think it’s a big problem

MM: I think it’s a terrible problem. It obscures separation of state. Using metalanguage state to describe object-language state. [...]

AR: I would argue to the contrary, this creates a clean separation between the embedded state and the embedder state

BT: if both specs were modeled in store passing style then….

MM: State passing style flattens all instances into one state. Right? That’s insane.

AR: But in the future we need to share things like shared memory

MM: when you share memory, share memory.

AR: Right, by giving it a common reference to the store.

MM: I understand why you’re driven to a store passing style, by using monadic style, i.e. using the wrapper clarifies the spec

AR: That would be just a wrapper

BT: isn’t this just an implementation detail? You could have per-instance

MM: it’s a semantic issue

AR: I think I see what you’re getting at, thread separation. I’d argue the problem starts with ES, which doesn’t have any clean way of describing threading and stores in the first place. It assumes entire complicated imperative multi-threaded metalanguage without ever defining anything. Threading is never specified and you assume different threads of computation going on. Wasm spec tries to model state precisely and explicitly.

MM: I have no problem with saying JS spec is horribly stated and hard to reason about. Attempts to formalize it use a monadic style. Separation logic...

AR: Separation logic has exactly one store and a way of factorising it in different statements. It’s not factored by construction.

MM: The JSCert style might be more what we’re looking for. They’re trying to follow the phrasing and locality of the ES spec

AR: do they include threading [MM: they don’t]. That’s the issue. It’s super magic.

MM: i don’t think this is trivially solvable. Two extremely different ways of fixing spec, can’t fix by just gluing them together.

AR: there will be patching involved

MM: minor incoherency: the Wasm JS spec uses W3C-isms like "queue a task" but JS has no tasks, only jobs. I think what you mean here is an ES job; Wasm is meant to be used outside the browser, but "task" is browser-specific.

next slide

MM: this is kind of a bug, but also something to talk about.

BT: table doesn’t have buffer, that looks like a memory

MM: my mistake. Get and set are stated as taking delta rather than index. More interesting is that constructor takes enum that can only be function but since this is forward looking why not just do it now?

AR: could be object

MM: In wasm spec language we talk about element types
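For reference, the shipped behavior that the slide’s wording diverges from: `get`/`set` take an index, while `grow` takes a delta. A sketch assuming the JS API:

```javascript
const table = new WebAssembly.Table({ element: "anyfunc", initial: 1 });

const previous = table.grow(1); // grow takes a delta, returns the old length
table.set(0, null);             // set takes an index (a null funcref is allowed)
const first = table.get(0);     // get takes an index
```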

MM: lessons for TC39. There used to be no way to make new error classes. ES6 lets you extend builtin classes. We can lift the restriction on subclassing Error now. No reason to frown on other specs doing this since user code can do it and shim this.
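The pattern MM describes, subclassing a builtin error in ES6 (the class name here is purely illustrative, not a proposed API):

```javascript
// User code can already extend Error, so specs defining their own error
// classes no longer need special dispensation: the same thing is shimmable.
class ExhaustionFailure extends Error { // hypothetical name
  constructor(message) {
    super(message);
    this.name = "ExhaustionFailure";
  }
}

const e = new ExhaustionFailure("oops");
// e is both an Error and an ExhaustionFailure, like a spec-defined error.
```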

MM: resource exhaustion. This is a tc-39 issue. Must trap/abort. Never throw. Happens at unpredictable time and no possibility that you can restore consistency. Instead, it should reliably trap or abort such that what’s left is reliably consistent.

JF: I want to understand. In wasm we have trap. Is that what you’re talking about? Trap can happen for OOB, etc. In JS embedding trap becomes exception.

MM: that’s my objection

JF: but divide by 0

MM: divide by 0 is okay

JF: for resource exhaustion, stack overflow, grow_memory.

AR: We leave that up to embedding. We throw our hands up and let javascript deal with it

MM: how? what does JS do?

MH: I’ve seen production JS apps that rely on catching stack overflow. We tried to make it a trap and Windows apps broke

BT: functional languages do this. Grow stack and simulate once overflow.

JF: Original design was when we stack overflow, do what JS does, I don’t know if the spec says that right now but that was the intent

MM: who sees the thrown exception? you’re inside wasm, have wasm stack.

JF: no way to handle trap from wasm. Only embedder can do it

MM: okay, so thrown exception is at point that wasm is entered from JS

JF: at closest entry

MM: what if it reenters wasm?

BT: For now, think of wasm frames as JS frames

MM: more interesting once wasm can catch. Traps for which there’s no hope of recovering consistency should leave the instance inoperable.

JF: defined to leave memory and tables intact in the JS embedding

AR: why the instance, that seems arbitrary

WM: Why does stack overflow mean inconsistent, incapable of recovering?

BT: assume stack overflow occurs at specific places. Most implementations do this, but not required.

WM: when stack overflow is defined as exception, you make sure it happens at predictable locations

AR: No reliable way to be defensive, impossible to protect

BT: no such thing as a well-defined place. Could occur anywhere, because of JS calls to the runtime that can trigger stack overflow.

AR: could happen again in handler

WM: yeah, double exception is programming error. Did not arrange to throw far enough

AR: No notion of double exception in the language, you can’t rely on being able to..

JF: I’d like to move on

MM: I can briefly answer AR’s question. unit of tainting / becoming inoperable is interesting question. What’s minimal unit? Only unit I know of is agent as a whole.

AR: actually, agent cluster if shared memory

LW: stack exhaustion seems hard. grow_memory is well defined and can be interpreted.

BT: grow memory is not an exception

LW: in js it is

JF: Different when called through Js, JS throws, wasm opcode defined to return -1, otherwise old size

MM: those are fine. problem with OOM is in anticipation of Wasm gc

MM: presumably instructions that allocate

JF: yes, but those don’t influence memory

JF: tables can grow
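JF’s description of the grow split can be sketched from the JS side (the wasm grow_memory opcode returns -1 on failure; the JS method throws instead):

```javascript
const mem = new WebAssembly.Memory({ initial: 1, maximum: 2 }); // in pages

const oldPages = mem.grow(1); // returns the previous size in pages, here 1

let threw = false;
try {
  mem.grow(1); // would exceed the maximum: the JS method throws a RangeError
} catch (e) {
  threw = e instanceof RangeError;
}
```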

MM: I don’t have a problem with flat address spaces like tables and memory. Wasm GC has a much more direct mapping. We should assume causal mapping with infinite memory. OOM should be unrecoverable. Would be nice to have a mechanism that says "this wasm unit is corrupt" which guarantees no further user code executes in that unit. Any attempt to enter throws.

LW: Another good place is the slow script dialogue. They should kill it in specific places.

MM: to reiterate and cut off discussion. This issue needs to go to tc39 first, then we have a place to stand to readdress here.

MM: wasm web interface: very little to say, I’ve already talked about this.

MM: I really like developer-facing display conventions section. Good show! must be non-normative. Another lesson for tc39

JF: the shortcoming is if there’s no URL, only bytes, then this looks bad. I added a hash, but it’s still hard to tell what instance.

BT: You could use the module name

JF: That’s what I did

Discussion about module names, function names etc.

LW: convention specifies how names and locations appear

JF: if don’t have url and don’t have name section, this is painful

Crash Reporting

BN: what prompted this is at GDC I ran into company called backtrace that has customers who want wasm crash reporting. They’ve done some crude support for what’s possible currently. Curious what’s possible. This is mainly for C++ code, where there’s an expectation of being able to get field crash reporters. Potentially lossy trace of stack, not usually a full core dump. Some possibility for remote debugging.

BN: Can we do something similar for what native apps have? Need to be careful about what we expose in trace. error.stack is limited, but we’ll have to be careful. We’ll have to tolerate incomplete information. Question about whether this is pure wasm or if there are JS issues too.

BN: I don't know about non-c++ languages that collect backtraces in the field.

BN: right now we have error.stack, possibly with names. Some systems have local variables. Backtrace would like these. Chrome uses crashpad/breakpad, which uses windows minidump format.

BN: could glue something in to error.stack. Also System.getStackFrames proposal.
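For context, `error.stack` is the nonstandard but widely implemented string-valued trace BN refers to:

```javascript
// In engines that implement it (all major ones), Error objects carry a
// stack property: a free-form string with no standardized format.
function inner() { throw new Error("boom"); }

let trace;
try {
  inner();
} catch (e) {
  trace = e.stack;
}
// trace is a string mentioning "boom" and, typically, "inner".
```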

BN: get stack frames, add isWasmFrame, array of locals, params, stack slots, globals. Placeholder for missing (undefined?)

JF: clarifying question

BN: the idea is to piggyback on something already existing.

MM: getStackFrames intentionally only provides control, not data. If we add data, then that would need to be more privileged.

BT: what does control information mean?

MM: Program counter.

BT: greater level of privilege, interpreters.

MM: beautiful way to think of where you need privilege over what

BN: question around when exposing data, to who, under what circumstance?

BN: underlying motivation is to send failure from field with minimum size. For native apps you rely on knowing size of stack frames, frame pointers, native calling conventions to relatively reliably walk stack

BN: wasm has a dual stack. The "hidden stack" is the native stack; the “shadow stack” is generated by tools. For dual stack, the shadow stack is often incomplete. Frame pointers live on the hidden side. Can lose the trail. Inlining also changes assumptions about the size of a frame, can lose the trail that way too. We may need some constraint on what engines can do to ensure walking both hidden and shadow stacks is possible.

JF: i don’t understand what you’re trying to expose to users. Are you trying to show every local?

BN: Not necessarily robustly every local, whatever makes sense for the engine.

discussion about tail calls and frames

BN: don’t always have frame pointer for everything

JF: just say "i don’t have your values"

BN: but don’t know how to follow the rest of the stack

JF: but you can find your way back to JS, right?

BT: Why isn’t there a chain of frame pointers

BN: Performance pitfall

BT: how receptive would they be to opting in to marking locals?

JF: Particularly volatile locals?

BN: Hint to engine saying "hey, keep these frame pointers around"

general confusion about why frame pointers are needed

BT: why not keep all the frame pointers?

….

BN: need shadow stack because values have home location there

LW: Are we talking about a future where we can see the native stack and iterate over it?

BT: yes

LW: this is useful for roll your own gc

BT: can you make hidden stack unnecessary? Is it just being repurposed for crash reporting?

BN: This is how our C++ implementation works

BT: that’s orthogonal to whether you can see them for crash reporting

BN: Depends on what types there are, if there are large structures..

BT: what i’m saying is the mechanism for seeing certain locals is orthogonal to C++ implementation for shadow stack.

BN: In general yes, to the extent that we want to offer incomplete information it has a bearing on how we express it - we have assumed that we don’t need enough info on linear memory - you can pay the cost..

BT: we’re probably talking about different things. If you can ask the engine to show you one value, might as well ask for n values and then you don’t have to put them in the shadow stack.

BN: there are values you might be able to move out of shadow stack. There could be perf impact

BT: Yes, but it’ll probably be cheaper to put them on the shadow stack yourself

BN: interesting things as far as tradeoff between. Can’t eliminate shadow stack because of types.

BN: spec people, implementers, how okay are you with lossy tracing API?

JF: What are you lumping into tracking? Do you mean every frame?

BN: i mean locals, parameters, etc. to the extent that engine can offer it

BT: if we offered an api we should make it complete and precise.

LW: We’re almost doing the same thing when you talk about precise tracing, we’ll be saving a lot of metadata or none of it

BT: it’s deopt info that you have for JS

LW: you save all of it or none of it. Best effort means you lose everything.

BN: concern i have is that people doing this want speed. We don’t want runtime overhead.

LW: not runtime overhead, metadata.

BN: If the API has any significant overhead.. They don’t want all the things

BT: we’re charging them. They’re asking for us

LW: they want to enable this in release mode. 5% or more is probably not worth it

BN: I don’t know what the threshold is

JF: we’re talking about making a mode?

LW: They might do a build that recovers a lot of locals, or some of them

JF: so not every stack slot of every program is recoverable. They can tag locals

BN: It is opt-in for locals

LW: opt in for local

BT: we could do best effort, but if we offer an API then we should make a contract to be precise.

BN: For all locals, or the ones we opt in?

JF: not just metadata. The data could be lost by register allocator, for example

LW: definitely not default, don’t increase default metadata

BN: The potential is that it’s typical to embed a crash reporter, this is expected by default

BT: other backstory is that for dead code we want to do best effort as a debugging step

LW: for devtools we have a signal that use has opened debugger

BT: we could do a lower tier. Don’t need to bikeshed to depth

JF: then you have gc safe points

LW: this might be gc related

BN: Is there discomfort with a lossy contract?

BT: yeah. if you ask for it, we’ll do it. Otherwise they’ll complain

LW: Use the lossy contract to build a GC?

BN: mapping to c++ makes assumptions about when stack_top gets adjusted. engine doing inlining couldn’t hide that in the trace, which creates a potential problem.

BT: by default engines that inline should make this totally transparent. The hint should say "you can inline this and i don’t care if i see it"

JF: i don’t like hints.

BN: so don’t inline?

JF: no, do whatever you want

BN: takeaways: opt in might be okay. for LW, BT uncomfortable with a lossy interface - JF?

LW: For GC that’s what we’ll need to do

JF: i’m not sure what you’re asking? Should be per value

WM: GC wants to not miss any variables, but it doesn’t care what frame they are reported in. If inlining causes variables to move to another frame, GC doesn’t care, at least for a conservative GC. There’s not enough info here for an exact GC

LW: We could do exact GC, if you were able to observe locals exactly, each frame would see the subset of locals.. Then you could do a precise GC

BT: or change. security implications

LW: three bits

LW: nondeterminism is distasteful

BN: Conundrum - native crash reporting you get a lot of stuff for free, vs here...

BN: strong preference for determinism

BN: edge folks?

MH: seems like a lot of work

BN: marketplace is already asking for this

BN: should we standardize this? error.stack is already fuzzy. We don’t have to standardize. Any strong preference?

BT: we should standardize so it works cross browser

LW: if precise, deterministic took a lot of time, i could imagine best effort, but not an ideal resting place.

BN: opinions about where to attach the feature? Is it JS, or somewhere else? C++ has more of an expectation of being able to do this

DE: would the specification status of error.stack be relevant?

BN: That was on an earlier slide, it would be - MM’s getStack proposal if we stuck this on the JS side

DE: I would prefer that we coordinate with the JS efforts, because the stacks are mixed, and there are non standard apis in use here - at least for err.stack - there’s a similar utility to getting this working across JS

BN: might be value in doing it across JS, but JS devs have lived without it for a while.

BT: I consider this to be an embedder api - we should have a standard embedder api, with a sort of a JS flavor

BN: fair point. You might want this in non-web embeddings. Fastly, do you want crash reporting?

PH: yeah, but we have a different privacy and threat model than browsers, so we’ll be keeping any sort of detailed information internal. Not sure what we’ll expose to users yet.

DE: web and nodejs could have common interface, coordinate with tc39.

BT: I think this is functionality that should be coming to all apis, shouldn’t be owned by the js-api - there should be a common embedder thing that comes to the js api

BN: if we added… facilities… in the same way we’re bolted in… yeah…

DE: if js api embedder of webassembly we want a common way to expose it

BN: there’s an interop issue, we should think about how to clean it up to use in the JS world

BN: okay, that's helpful. no concrete proposal, but this is helpful to get started with. Edge expressed concern about implementation. What about others?

BT: We have this functionality because of Turbofan, but yak shaving needed. About a quarter of one engineer’s work.

LW agrees

BN: apple?

JF: Once it’s designed, implementing is not hard. We have OSR, so register allocation changes are easy. I assume it’s trivial.

BT: you have the same functionality for javascript

JF: we don’t do tiering.

Wouter: if we’re going to dump variables in crash logs we get scalars and addresses. Go beyond that? Can we say a linear address maps to a specific type?

BN: existing plumbing in crash reporters to figure out what is useful to go into crashpads

Wouter: these are crash reporters?

BN: breakpad

LW: so you mean something running in wasm?

BN: Talking about running crashpad/breakpad within wasm

BT: think of wasm as a machine. crashpad takes registers, etc.

BN: No, those need to go back to browsers, CDN, had some internal folks to try to talk me into it...

LW: it does happen, browsers oom, on specific urls

BN: There’s maybe something to it, but lots of privacy peril

LW: if site-isolated then maybe…

BT: sudo give me your memory

LW: you don’t get that with user breakpad. You don’t get crash reports when breakpad crashes

BN: maybe when we’re all super site isolated.. but even then...

BT: Implement x86 emulator in wasm..

BN: there’s undoubtedly a legal contract with crash reporters...

LW: Just a count of how many times you’ve crashed, start with the minimal possible thing. You could imagine a step by step incremental process

dark thoughts

Update on tooling + Math + bikeshed

Brad Nelson

SLIDES

BN: We’re trying to fit our specs into the W3C’s expectation of what their docs look like. Historically they want their docs to be on the web, formats like PDF are not acceptable to them. Various standards have used various hacky scripts.

BN: Bikeshed is a tool that targets the format the W3C expects. A pile of hacky scripts. Takes in markdown(ish), or raw HTML. Has a ton of magic, bad separation of concerns; the bikeshed repo itself knows about specs that are using it…

JF: But it works, right? I recently added support for the C standards committee to bikeshed

BN: what it has going for it: if you go with their formats they have checkers, which vet that you’ve used a tool not unlike bikeshed to produce your spec. It would be a lot of effort to pass those checkers independently, without bikeshed. They were authored by the same people as bikeshed (probably). I did something yesterday that broke all these rules after previously having reached 0 errors. The remaining errors have a small set of root causes. That’s it on background.

BN: Here’s how we get through the pipeline to produce the W3C core spec. First we take RST through Sphinx and get a single-page HTML rendering out. The MathJax is in this, but it will fall over because there is too much of it in here. To get it through to the destination format, there’s a small hacky Python filter that fixes up things that confuse bikeshed. (In principle we could fix bikeshed but we have not investigated this completely.)

BN: After that bikeshed chews on it and spits out HTML with MathJax in it. The MathJax confuses the pub checkers. When you put all 4000 equations in a single page, Chrome crawls to a halt. It could go through MathML for browsers or to SVG (which takes a long time to chew on), but when targeting HTML/CSS it never really seems to finish, always keeps reflowing.

BN: so to get us out the door I made another hacky Python filter that converts the MathJax through KaTeX and embeds that instead. KaTeX itself is not structured for easy invocation on a document, it’s for fragments, so I’m spinning up a lot of processes to do that. This is why my laptop has been grinding and spinning audibly. Takes a while.

AR: Some of these hacks are not just to get KaTeX to process, but workarounds for the LaTeX?

BN: Yes, there are two categories: working around limitations of KaTeX, and solving things the pubchecker screams about. There are two versions of the pubchecker and you have to use both.

BN: KaTeX is a small subset of LaTeX that’s just for math. It has weird gaps like macros (which I made a reimplementation of), and generates one span per character because they really try to override what the browser wants to do.

BN: In Firefox the spec looks especially nice, it’s buttery smooth to scroll. In Chrome it’s not too horrible, a little bit less buttery smooth.

AR: On my old laptop it froze the browser

BN: It isn’t ideal but there are active folks working on both; Khan Academy is working on KaTeX and making it more complete.

BS: Have you told them we are using KaTeX?

BN: I haven’t explicitly told them but I’ve filed issues. We hit a bunch of edgy things because most people are using far fewer equations.

JF: Bikeshed now has a script that adds your spec to the automated regression tests

BN: Another bikeshed thing we haven’t tried: there’s a bunch of manual stuff W3C people did for me on the first working draft. There is ostensibly a way that bikeshed can talk to their servers but I haven’t tried it.

BN: Bikeshed has gaps around PDF output as well. I’m talking to Tab and working on a syntax that other specs that just want to use KaTeX with bikeshed could use, so we could work on caching or parallel running of the tool. Out of curiosity, Andreas, how close does that come to addressing your concerns?

AR: The Sphinx syntax is terrible anyway, so it doesn’t matter

BN: Unfortunately the current situation is a bad place in the long term. It gets the spec out the door, but going forward we need to get out of this. Having Tab involved in transcription of this stuff is helpful; we also need to solve multipage support and PDF output. Multipage is an unfortunate story. Other specs do have multipage, but each uses its own hacky scripts to chop the document apart.

JF: You should use a service worker to do that /sarcasm

BN: I like the one page view but if you have the wrong laptop or a mobile you are in trouble.

BN: The pdf support is a philosophical issue, he says you can use the current stack and use the print option to get a PDF, [shows pdf that looks not great]

JF there is CSS for printing

AR: Print is worse than a word doc

BN: Short of something extremely thoughtful, all the issues of producing print documents, the printout is 80% of what we’d achieve in the best case. My offhand take is I’m somewhat with tabb, it would be nice if we had a really elegant doc but it has to be a web thing. If we’re going to care about a pretty doc we have to swing in the totally opposite direction. I have two polls on the opposite sides of the spectrum

BN: We could go back to the W3C and say we should transition from Sphinx to pure LaTeX. The W3C won't be thrilled with this view, but maybe we as a WG are unusually valuable to them, so maybe we would get away with it. I would expect that there would be disagreement from the W3C on this.

AR: The whole markup thing as an input option is a huge pain

BN: It's an aesthetic thing, but even though the programmatic structures and flexibility of the tool are very powerful, the input format would be far more understandable as LaTeX. We'd be swimming against the tide, though.

BN: On the other hand we could go the Bikeshed route. We'd have to swallow having no PDF form, but we could make it a lot less hacky if we put some energy into it.

AR: It would have cost me half the time if I had done it in LaTeX. This markup does not scale. E.g., there are no macros at all, among other pain points

BN: If we had used LaTeX we would have had to go back and say we will not pass the W3C checker, we'll be a special flower.

AR: Bikeshed isn't a real document processing system, mostly regexps over HTML docs

BN: It does use an HTML parser

AR: It seems unlikely it will ever scale to real documents

JF: HTML is a web spec, PDF is just PDF.

BS: We're just talking about the core spec.

BN: We don't have to put the other specs in here; we are not entangled with the other two specs, JavaScript and Web APIs.

EH: What's the state of LaTeX to HTML?

BN: It's very fragile

AR: Some webpages that use it look quite professional but I don't know how they pull it off.

BN: The layout of LaTeX is at odds with the layout of web pages

EH: Could you modify some existing template to…

JF: Has Google volunteered labor to do any of this?

BN: Tab and his group will make sure Bikeshed handles output of LaTeX via KaTeX to docs well.

JF: Is there headcount for any other transformations?

BN: My hope is that the transcription work will be automatable. If we have the tool I can rustle up the manpower to do the conversion to Bikeshed syntax. Fundamentally, with Bikeshed there will be things missing.

AR: How much of the pipeline is working around issues with Bikeshed and KaTeX that will go away over time?

BN: Some of these hacky scripts just got introduced a few weeks ago; one should be possible to fold back into Bikeshed. I'm worried there are things in Bikeshed I don't understand. I need some cycles from Tab to understand the options there better.

BN: For the MathJax-to-KaTeX filter, half of the lines of code are just plumbing to run KaTeX, so that would be folded into Bikeshed with the proposed syntax. The remaining code is there because we're putting things through Sphinx and dealing with getting that to meet the pub rules. If we transitioned to being totally inside Bikeshed those would go away. There's a handful of things that have to do with limitations of KaTeX (like lack of macros) that would maybe be amenable to taking patches.

AR: And there are some things in KaTeX that just don't get typeset properly

BN: Yes, there are known bugs. Some of these goofy formatting things are just the way the W3C format works. The array thing is just a missing facility in KaTeX. Maybe it's not technically in the strict math subset.

Does not underline links in math

AR: I hate link underlining anyway

BN: We can, but I don't know if the pubchecker will scream at me

BN: The current way is untenable. Andreas would have spent a ton less time if it had been designed better.

BN: One issue is that some of our output is kind of programmatic where it should be human-generated, so maybe we can get rid of some redundancy in our spec. It's hard to get programmatic output to follow all the rules

AR: I would prefer to stay where we are and hope that some of these hacks will go away.

JF: V1 we are not going to change. Let's revisit later when we're done with V1.

BN: If we're not going to use Bikeshed to do the math then it won't be worth having Tab go in and make those changes. If we're not going to do that we should let him know.

AR: I thought bikeshed would take care of the inline math

BN: Then we'd have to change all the math to a format that Bikeshed understands. We could push him to take exactly the Sphinx format. Maybe we would use that just for putting a math fragment in our JS spec, and fold more of these hacky scripts into Bikeshed.

AR: There are a lot of forks of the spec right now, so any conversion of the format would affect both our spec and every single fork out there. Lots of work, lots of conflicts.

BN: We could make an automatic tool, but it's not clear how well that would work out. If we are consistent enough in our practices we could keep the hacky scripts.

BN: Does anyone want to advocate strongly for transitioning from Sphinx to pure LaTeX? We need to do it now if we do it at all

AR: We can't afford to make major changes to the way the spec is written currently because it would make major pain for every ongoing proposal

EH: I don't think anything else is going to be much better to justify all the pain of switching

BN: This will happen maybe by inertia? If he shows up and has part of the spec converted then maybe we will do the rest

AR: I don't think we can take the polls without anyone having tried either approach

BN: The assumption with the latter poll is that Tab is willing to put in the facilities to get the LaTeX all the way through Bikeshed. We have an opportunity to be influential; if we don't do anything then math will continue to be hard in other specs. I have mixed feelings whether Bikeshed is good in total for the influence, but it is widely adopted

EH: Other groups may want math, so it would still be worth it for Tab to put in the effort to support math.

AR: As long as KaTeX has problems and Bikeshed relies on KaTeX, there are two programs in the dependency path

BN: Which reviewers reviewed the Bikeshed output? (About six people.) Which reviewed the Sphinx output? (Just EH.) I encouraged them to review the Bikeshed output.

EH: The advantage of Sphinx is I could print out just my section. It was hard to say what pages I wanted out of the giant Bikeshed output.

BN: Right now we link to the Sphinx version, which introduces additional options. I want there to be one definitive document to reduce confusion.

LW: The rendered HTML goes a lot faster for me. As long as there are not glaring problems with it. For a while it had issues

BN: Yes that's the one that counts.

JF: Goal should be that we should review the one that gets published

AR: This slowed my laptop, and I'm sure it wouldn't work on mobile either. Aesthetically it's still inferior.

BN: So I just wanted to change where we direct people, but there is this loading issue. The other issue is whether we put energy into the hacky multipage script, or get Bikeshed to support multipage. There are some specs where the multipage version is definitive; they don't quite pass the pubchecker but they get away with it.

JF: Ignore the multipage thing. If we're going to publish the Bikeshed thing why would we point people anywhere else?

BN: Kinda what I’m getting at

AR: For reviewing, the issues you'd find in the Bikeshed version are a superset of the other version, so you should review that. Strictly speaking I don't think there's even a requirement that we link to the official document

BN: Sure, but there is a substantial difference between those documents. There is a difference in the ordering of the appendices. This is rooted in multipage, which just doesn't work with Sphinx because of MathJax. [Enumerates some other technical issues]. There's enough difference in plumbing that it's not two views of the same doc.

AR: Is it searchable only in the single-page version? Searching is not always the same as an index, but I guess most people don't care.

BN: Sounds like you're concerned about pointing people at the Bikeshed version. What is the threshold? We're not on a path that will get them looking exactly the same.

AR: Fix the outstanding math formatting issues

BN: The column/vertical ones are harder to fix, not pretty

BS: In 4.5.3.8 there are issues where it all turns into one word. See where it says "where forall i"

BN: We're not having people look at it, maybe the core folks are. I'm looking for the trigger event where we stop pointing folks at the Sphinx version.

AR: When I edit spec text, I don't want to deal with the turn-around time of building the Bikeshed version.

BN: One advantage of using Bikeshed as the input format is that maybe the build time would get better (than Bikeshed currently). If you load all the math on one page it'll always be worse than MathJax. It's crashed my browser, but it worked in Firefox

AR: I think anybody who does spec editing will want the other version for the time being

BN: It's slow on the first build but it rebuilds … after Sphinx, Bikeshed is relatively quick itself; you pay for Sphinx anyway … then it writes out things it believes have changed

AR: Turnaround time is already 10 times as much

BN: Maybe we can fix that in tooling but I’m ok with build time being the blocker.

JF: Can we move on to other stuff now?

BN: I am looking for guidance. It sounds like in terms of exit criteria I'm looking for something. Maybe we need to go back to the first poll and say we used the wrong tool

AR: As far as I can see, for the foreseeable future it's mainly about switching the links. There's an issue with switching external links: it makes links stale

BN: Just to add to the confusion, none of these is the canonical version according to the W3C; they have a link to the version they think is true. We should move on

Section Review

Intro Section

LZ: Pretty short and sweet. Only one issue: in the design goals we mention debuggability, but this is out of scope. The only thing we spec is a text format, which is a bare minimum for debugging. We might want to revisit this if we add crash reporting to the spec.

AR: Anything we might specify about debugging will probably live in the embedder space or a separate one?

BN: Debugging through API has some non-standard APIs

LZ: We might want to put something different in here if we have something now in the future, but I’m fine with this for now.

LZ: Nitpick about footnotes: the number shows up in the middle of the paragraph rather than at the front.

BN: General issue with the tp directive not being properly supported

AR: The footnote is not KaTeX

BN: That could just be something goofy then

LZ: footnotes are like this throughout. Probably part of the tool

BN: Probably just something weird about the stylesheet

JF: oh, the stylesheet can’t be embedded?

LZ: Nitpick #2: we inconsistently introduce bulleted lists with either a colon or a period. Do people care? No. Does it bother me? A little bit.

BN: I like colons, let's use them

BN: Is that Edge? I haven't tried that yet

LZ: yeah

LW: DE posted a gist of remaining issues before v1 becomes a spec

BN: oh, let’s project

LW figures out projectors

LW presenting Dan's gist to resolve issues before v1

LW: Should we figure out CSP for v1?

EH: I figured we’d sort it out later

BN: does anyone feel strongly about CSP before v1?

discussion about status of CSP

LW: Dan’s question - Are we just having things we accept are incompatible?

JF: Aren’t people who complain about CSP just people who use the Chrome web store?

EH: Chrome disallows wasm without unsafe-eval on the web. We have a workaround for Chrome Apps

JF: Have we heard anyone say "this is a problem for me"?

BN: We should fix it. This one we will fix in the limit, just not now
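To make the Chrome behavior EH mentions concrete, here is a minimal sketch (an illustration, not spec text): synchronous compilation from bytes is the operation that a CSP without `unsafe-eval` blocks in Chrome, while `WebAssembly.validate` is unaffected. The 8-byte buffer below is the smallest valid module (magic number plus version); run outside a CSP-restricted page, both calls succeed.

```javascript
// Smallest valid wasm module: magic "\0asm" followed by version 1.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Validation does not produce executable code, so CSP does not gate it.
console.log(WebAssembly.validate(bytes)); // true

// Synchronous compilation from bytes is what a page whose CSP lacks
// 'unsafe-eval' cannot do in Chrome (it throws there); here it succeeds.
const module = new WebAssembly.Module(bytes);
console.log(module instanceof WebAssembly.Module); // true
```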

LW: should we align on resource limits?

DE: are you going over the gist? I can’t hear most of you

Dan joined call, will be presenting

LW: we don’t think this should block v1, but we do want to agree (CSP)

DE: and resource limits?

LW: we’re discussing. What do you think?

DE: unclear who wants it or what the definition is, or what layer. Web spec? Core spec?

AR: I think it’s clear it belongs in the web spec

BT: It deserves a mention in the core spec that some implementations enforce resource limits

AR: there’s a whole appendix about that

JF: maybe in the core spec we want a list

AR: there’s a list in the appendix.

DE: would the web spec have a number for every one of those?

AR: some of them you can't have numbers for; there are many variables and dynamic conditions that influence this

JF: the state of things is that Ben sent out limits.h and all the browsers picked what made sense for them.

BT: one small difference in how limits are enforced for parameters

JF: There's stuff on the list that did not make sense for us, so we did not pick it up

BT: let’s sync offline

LW: So we want this to go in the web or JS spec?

BT: I’d vote for web. We have a limitation about synchronous compile size.

JF: Node.js doesn't care about that, so it's ok to put it in the JS spec

DE: makes sense to me. Concepts like "main thread" have meaning in the web spec but not in JS

BT: that sounds reasonable to me

AR: Why would they?

DE: there's a lot of work going on right now with aligning Node.js with the web platform. Compatibility is useful. Our factoring should help with that.

BN: We've had Node folks say that they want consistency with browsers even if it costs them performance

LW: some limits are unknowable, like stack overflow. In the core spec we’ll say "sorry, there’s nothing we can do" but other limits will have a constant.

AR: I find it weird to overconstrain JS implementations, but I also don't really care

JF: do we want to do this for v1 or not?

BN: it’s a small enough amount of work.

LW: it’s observable, so we should fix it

JF: does it reset the timer?

BN: the test they prescribe is "would it change whether an implementation is compliant?"

JF: it doesn’t because right now we say you can enforce anything.

DE: if we put resource limits in the web integration spec, those go in WPT and they don't care about versions

BN: tests don't count against the clock. They are considered an imperfect approximation of the spec. We're only three months into the 150-day clock; let's not worry about the clock. I cared about the clock to lock down outstanding issues. The only risk is patents on WebAssembly limits, so we're pretty good.

JF: Do we want to do the limits for v1, if so who will do it?

BN: I'll do it. A PR against the JS API

JF: Let's make sure we all do it; I'd like to see it before we break on tests

LW: before long we’ll have WPT to point out differences

DE: we have a few different standing areas of disagreement. JF mentioned other limits that V8 has but that they decided don't make sense. Do we want to specify a limit and define JSC to be non-compliant?

JF: I put limits for everything I could enforce.

DE: what couldn’t you enforce but others could?

JF: I don't remember, let's do a PR

DE: you’ll get more compatibility if everyone strictly enforces the limit.

JF: I’m not advocating for that. Some numbers don’t make sense for us, so I just don’t do anything for them

Action item: Ben Titzer to create a PR for limits.
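As a concrete illustration of the kind of limit under discussion, the sketch below hand-assembles an otherwise-valid module whose single function declares 1,000,000 locals, far above the roughly 50,000-locals bound from the limits.h list mentioned above. Whether and how an engine rejects it is exactly the implementation-defined behavior being debated; the example reflects an engine (e.g., V8) that adopted that bound. This is an illustration of the issue, not spec text.

```javascript
// Hand-assembled module with one () -> () function whose body declares
// 1,000,000 i32 locals. Everything else about the module is valid; the
// core spec itself imposes no bound on the number of locals.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, // magic + version
  0x01, 0x04, 0x01, 0x60, 0x00, 0x00,             // type section: () -> ()
  0x03, 0x02, 0x01, 0x00,                         // function section: 1 func, type 0
  0x0a, 0x08, 0x01,                               // code section: 1 body, 8 bytes
  0x06, 0x01, 0xc0, 0x84, 0x3d, 0x7f,             // body: 1 locals entry, count 1,000,000 (LEB128), type i32
  0x0b,                                           // end
]);

// An engine enforcing the proposed embedder limit rejects this at compile
// time with a CompileError.
let rejected = false;
try {
  new WebAssembly.Module(bytes);
} catch (e) {
  rejected = e instanceof WebAssembly.CompileError;
}
console.log(rejected);
```

On an engine that did not adopt the limit, compilation could succeed, which is precisely the cross-browser divergence the proposed PR would pin down.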

DE: this might be a typo: postMessage or copy of a memory just throws. That's still true with threads; we can only do serialization but not transfer for shared memory. Are these the semantics we want?

LW: we talked about this last time and there was general agreement

JF: are we discussing whether we have consensus to have consensus to have consensus?

DE: I miswrote that we had consensus, but we didn't want to have serialization unless we had shared memory? Module serialization had lots of discussion. My impression was we wanted to restrict to the same agent cluster. Since then, we've talked about same origin. Anne points out these are subtly different. He likes the way this was specified. We don't support agent cluster when writing to IndexedDB. Do we have consensus on this? I have a PR out

BN: that sounds fine. I lost track of why, but I recall it making sense. I thought we had understanding at that meeting that there was a difference between site and agent cluster.

DE: the last thing was writing to indexed db. Do we want wasm modules to be serializable?

BN laughs

BN: I think that’s what we said, but there’s some desire to relitigate, but I don’t think that’s for you to sort out. Let the stakeholders argue.

JF: current status quo is you can serialize modules to indexed db

BN: we haven’t shipped it due to implementation issues.

JF: we haven’t either

BN: we’re trying to get it soon. Do you have a sense for when?

JF: We don't think there's a rush. The web cache should handle this

DE: that's a design issue that we should get agreement on. If this is in v1, does that affect whether you will work on it?

JF: a valid implementation can always fail to serialize

DE: this is a performance/reliability question. IndexedDB was a really core design choice, instead of relying on the web cache.

BN: I’ve had a number of companies complain that Chrome hasn’t shipped this. Users care.

JF: users think they care

DE: maybe the companies are wrong. What about HTTPS only? I like that idea. Persistent vs. non-persistent exploits, storing code. We don't want to enable that on insecure channels.

JF: I thought we wouldn’t do any secure context things until we had a full design instead of point solutions

DE: This is a well-understood vulnerability; what's your point about this?

LW: this is only about serialization

JF: it's a new restriction not in the original design. I don't dislike secure-only; I dislike "oh let's just do it here" without a better view of what's HTTPS-only

DE: we have several open issues. Is it worth specifying this in v1? Let’s work it out later

JF: stage 5

DE: I don’t want to put something in the spec that we don’t want people to ship.

JF: we're rehashing. IndexedDB is allowed to fail

DE: the whole point is to be more reliable.

JF: that's not what I'm talking about. Does Edge ship IndexedDB?

MH: no

LW: it works though, semantically

MH: semantically it works..

LW: it doesn’t throw

LZ: we don’t serialize

MH: we have our SCA library that works.. I guess.

BN: we have an implementation behind a flag, but we have concerns about leaving the database in a state where we’re unsure of the upgrade state.

LW: does this miss v1 then? is it a post v1 feature?

BN: it would give ammunition to those who want to relitigate this topic

DE: the ammunition is already there if someone who doesn't like it also wants to leave the spec in an unshippable state. I don't know how to resolve this

BN: if we leave the secure aspect unspecified, would that be okay?

JF: that's what I proposed before. I want us to do this in a principled way. I don't like opportunistic encryption. It's a wider discussion. We said last time we'd talk about this with WebAppSec. Since then, the TAG has had guidance on HTTPS: try to do it if it makes sense, user data, etc. It doesn't say security vulnerabilities.

DE: I think this is consistent with tag

JF: I don’t

DE: we have specifically identified risk here

JF: they talked about geolocation. They didn't talk about a specific exploit. We need a wider discussion about how to apply HTTPS to WebAssembly. This can be the poster child

DE: I’ll open an issue with the TAG

Action item: Dan to open issue with the TAG on secure context for WebAssembly, with IndexedDB as one of the things we could force under secure context, but considering all the things that could be under secure context.

JF: that shouldn't be "should IndexedDB be HTTPS-only" but "here is one thing we should discuss, among many"

BN: do we keep this in v1?

JF: this was never in v1

BN: no, I mean IndexedDB

JF: yes, that’s a separate question

BN: Dan, you're uncomfortable with leaving this in an inconsistent state?

DE: I'm comfortable talking to the TAG. I'm uncomfortable with browsers taking a different stance

BN: it sounds like we should block on CSP then

JF: we need to talk to WebAppSec

BN: the discussion hasn't happened yet; trying to front-load it. Potentially unbounded discussion. Trying to pin down the value system. Google folks' approach is "we know it when we see it." Disentangling is nice, but these are all mixed

DE: does it make sense to talk to the TAG, or should we talk to WebAppSec?

BN: our CSP folks are reluctant to say what we can and can’t do, but rather how to help

JF: we're mixing issues. We need to stop mixing. HTTPS should be brought up to the TAG in a wider context.

DE: the TAG's ruling is that groups should make these decisions on a case-by-case basis, but others were saying everything should be secure context

JF: let's move on. No more secure context. Done. Next thing: IndexedDB in v1?

LW: should we resolve the first issue before v1 goes out the door?

JF: original decision did not have anything to do with secure context. This is a big change

BN: that would reset the clock

JF: I’d ignore secure context

BN: before we move on, are we comfortable with getting guidance, and potentially making a clock resetting change?

LW (or JF?): sure

JF: if we want indexed db in v1, sure

DE: I don't see why we need to decide on IndexedDB before asking for guidance

JF: these are two issues and separate

DE: if we are refusing to make changes and get guidance we should….. they're connected

JF: we are getting guidance

DE: We can only make this decision once we get guidance from TAG

JF: we're forking a thread. Ignore the spec. Get guidance. Separately, we're trying to get to v1. Does this thread join on the TAG?

LW: let’s discuss it now. I think we should wait on tag.

JF: we’re getting guidance

LW: Okay. that determines what we do for indexed db

BN: aside from guidance, are there other issues?

DE: it would be nice if we had a shared understanding among implementations that IDB will be supported. We're at a strange impasse, saying it's okay because IndexedDB is fallible. That doesn't sound like consensus

MH: we want to use the web cache instead for Edge. IDB needs proactive caching, but we compile lazily. We'd want to cache the functions you compile

LW: does your webcache allow updating?

MH: not really. There’s a structured clone algorithm they call

LW: would webcache be better?

MH: we could onload of script context send off a job to cache it how we want

LW: if we were talking about adding functionality could that also go back and update?

MH: I have some concern about saving jitted code in general

LW: so you want to remove idb support?

MH: if you want to serialize your module, we don't do anything optimal. We just spit out the bytes from the WebAssembly module. Technically it works, but it's not really anything optimal. Not an improvement over the web cache: wasm module bytes vs. an IndexedDB copy of wasm bytes

LW: so is that an argument to remove the feature?

MH: it's not useful for us is all I'm saying, but I guess it's useful to you, so I don't care

BN: has anyone implemented more aggressive caching in browser cache around larger modules? We don’t cache anything large enough to be meaningful.

LW: we haven’t implemented wasm cache with browser http cache.

LW: it seems like we need to consider whether more browsers intend to ship this

BN: that matters. if we do a bunch of shenanigans to optimize this then maybe we don’t want to.

JF: they can still serialize the blob

LW: if they’re doing something explicitly they might as well get the full benefits.

BN: even if we remove the feature, we’re not dissuading people from stashing array of bytes. The users distrust the cache and save images

JF: sure, so put everything in idb since you don’t trust the cache

LW: maybe idb is not fully understood and we should kick it out of v1

BN: some people on our end want to see that die and webcache get better

DE: it sounds like the web cache is more likely to be optimized across browsers. If the point is to get something that is reliably optimized, maybe IDB is not the right idea

LW: IDB is not right in all circumstances. Lots of low-level failures

BN: Safari: are you likely to keep a 40 MB cache around?

JF: our code is position independent. It’s not clear that it’s a win

DE: there’s a persistent storage api that prevents eviction

BN: there’s a cache api that doesn’t say anything about code caching but can cache bytes, but can lead to stuff getting thrown away. not exactly predictable performance

JF: what about browser updates? We evict the cache on browser upgrade

BN: one challenge with our impl is that on update, code will get regenerated but not stored back to the database, so you have to re-push.

LW: we had to specifically add code for that case because otherwise on update forever the cache wouldn’t work

BN: we’ll probably have to do something eventually, but maybe we can give devs a workaround

LW: that sounds sketchy

BN: maybe it’s the wrong place

LW: or too early

BN: I'm a little sad but … JF, are you likely to do this?

JF: do you really think you'll get an answer?

BN: alright. I guess the fact they haven't done it already is an answer. JF, can you/Fil even do this?

JF: there’s an open bug

BN: against jsc?

JF: yes. Doesn’t mean we’ll do it

BN: I could imagine some sites… this has the shape of a compat hazard, but it's important

DE: maybe if apple can’t commit to shipping now, if chrome ships and it helps a lot they may decide to prioritize it. That could help get consensus.

BN: we can keep it

LW: that’d work. Chrome should ship it

BN: we’re working on it
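The fallback MH describes, caching raw bytes and recompiling on load, works in every engine regardless of IndexedDB module serialization. A minimal sketch follows (the bytes are hand-assembled inline so the example is self-contained; a real site would fetch them once and stash the ArrayBuffer in IndexedDB or the Cache API):

```javascript
// Hand-assembled module exporting one function: add(i32, i32) -> i32.
// These are the bytes a site would stash as a plain ArrayBuffer.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" (func 0)
  0x0a, 0x09, 0x01, 0x07, 0x00,                          // code section header, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                    // local.get 0; local.get 1; i32.add; end
]);

// "Reload" path: a full recompile from the cached bytes. Persisting a
// compiled WebAssembly.Module instead is what would let engines skip this.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

The cost being debated is exactly the `new WebAssembly.Module(bytes)` step: for a 40 MB module it is a full compile on every load unless the engine can deserialize machine code from storage.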

15 minute break

discussion about Fetch and asynchronous instantiation

DE: that’s all the issues I had. Thanks for going through them.

JF: what’s next? Brad, do you want to do structure?

BN: that’ll be really quick. Derek filed all the issues.

Wouter: I have some notes. Most are "this doesn’t look clear, maybe there’s a better way"

JF: can you file issues?

Wouter: yes, I'll make issues for whatever is unresolved. 1. The types section only contains function types; maybe it should be called function types

AR: in the future there will be more

Wouter: 2.5.9 are there limitations of the signature of the start function that are worth mentioning?

JF: it should say void->void

AR: validation rules catch this. Section 2 defines structure without going into details

Wouter: the spec repeats itself a lot

AR: we should do less of that

BN: Section 3

LW: I filed all my issues

Wouter: I have one issue. 3.3: typing rules for unreachable…

JF: you don’t want to go there. Ben, how many hours have you spent on this?

BT: more than on testing

AR: rationale document explains this

BS: but not in the spec?

AR: yes, we decided to leave rationale out of the spec

JF: moving on to execution

BS: I just posted that; not much needs to be discussed. Some clarifications, some typos

Wouter: I have some issues. I’ll put them on the screen. Opening another can of worms...

collaborative effort to figure out how to zoom text

Wouter: using const instructions as values is confusing

AR: we’d have to redo the whole formalization.

Wouter: okay, not going to happen. 4.2.2 … something about traps

AR: we might generalize this to exceptions

Wouter: 4.2.4.3 seems too implementation-specific

more combing over fine details, when to use notes, etc.

Section 6

EH: Theorem sections use "resulttype" as one word; is it because it's a syntactic object?

AR: Yes, which section? I’ll fix it if it’s a problem

EH: The way we talk about termination is strange. The prose description "..." is in the explanatory spec, but the formal spec calls it out while the prose doesn't. The rule for host functions assumes that they terminate; does it matter?

AR: Trying to remember… We might not cater to non-terminating host calls; if the host calls don't terminate we don't do anything. All this is going to change with threads anyway; it's an interesting edge case

EH: Notation of prepending sections, seemed more like appending than prepending

AR: Luke commented on this as well. Some back and forth trying alternatives.. Ended up adding more explanatory notes

EH: Everything else has been fixed with the formatting Brad's been doing

AR: Please file issues for all the unaddressed items

JF: British vs. American English inconsistency

AR: Tried to use American English, might have overlooked some cases

Discussion about committee mandates on what the right version of english is

Closure