Zoo Modeling App


download at zoo.dev/modeling-app/download

A CAD application from the future, brought to you by the Zoo team.

Modeling App is our take on what a modern modeling experience can be. It applies several lessons learned in the decades since most major CAD tools came into existence:

  • All artifacts—including parts and assemblies—should be represented as human-readable code. At the end of the day, your CAD project should be "plain text"
    • This makes version control—which is a solved problem in software engineering—trivial for CAD
  • All GUI (or point-and-click) interactions should be actions performed on this code representation under the hood
    • This unlocks a hybrid approach to modeling. Whether you point-and-click as you always have or you write your own KCL code, you are performing the same action in Modeling App
  • All graphics should be built for the GPU
    • Most CAD applications have had to retrofit support for GPUs, but our geometry engine is built for GPUs from the ground up (primarily Vulkan on Nvidia hardware), giving it an order-of-magnitude rendering performance boost
  • Make the resource-intensive pieces of an application auto-scaling
    • One of the bottlenecks of today's hardware design tools is that they all rely on the local machine's resources to do the hardest parts, which include geometry rendering and analysis. Our geometry engine parallelizes rendering and just sends video frames back to the app (seriously, inspect source, it's just a <video> element), and our API will offload analysis as we build it in

We are excited about what a small team of people could build in a short time with our API. We welcome you to try our API, build your own applications, or contribute to ours!

Modeling App is a hybrid user interface for CAD modeling. You can point-and-click to design parts (and soon assemblies), but everything you make is really just KCL code under the hood. All of your CAD models can be checked into source control such as GitHub and responsibly versioned, rolled back, and more.

The 3D view in Modeling App is just a video stream from our hosted geometry engine. The app sends modeling commands to the engine over WebSockets, and the engine streams back video frames of the resulting view.
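
As a rough illustration of the command flow (a sketch only: the WebSocket endpoint below is a placeholder, and the command payload mirrors the debug-panel JSON shown later in this README):

// Hypothetical sketch of sending one modeling command to the engine
const ENGINE_WS_URL = 'wss://example-engine-endpoint/ws/modeling/commands' // placeholder, not the real endpoint
const socket = new WebSocket(ENGINE_WS_URL)

socket.addEventListener('open', () => {
  socket.send(JSON.stringify({
    type: 'modeling_cmd_req',
    cmd_id: crypto.randomUUID(),
    cmd: {
      type: 'default_camera_look_at',
      center: { x: 15, y: 0, z: 0 },
      up: { x: 0, y: 0, z: 1 },
      vantage: { x: 30, y: 30, z: 30 },
    },
  }))
})
// The engine renders the result server-side and streams video frames of the view
// back to the app, which displays them in a <video> element.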

Tools

Original demo video

Original demo slides

Get started

We recommend downloading the latest application binary from our Releases page. If you don't see your platform or architecture supported there, please file an issue.

Running a development build

First, install Rust via rustup; this project uses a lot of Rust compiled to WASM. We always use the latest stable version of Rust, so you may need to run rustup update stable. Then, run:

yarn install

followed by:

yarn build:wasm

or, if you have the gh CLI installed:

./get-latest-wasm-bundle.sh # this will download the latest main wasm bundle

That will build the WASM binary and put it in the public dir (it is gitignored).

Finally, to run the web app only, run:

yarn start

If you're not a KittyCAD employee you won't be able to access the dev environment, so copy everything from .env.production into .env.development to make the app point at production instead. Then navigate to localhost:3000. The easiest way to sign in is to set a token in localStorage from the browser console (see the snippet below), using a real token from https://zoo.dev/account/api-tokens, and then navigate to localhost:3000 again. Note that navigating to localhost:3000/signin removes your token, so you will need to set it again.
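
For example, in the browser devtools console (the token value is a placeholder for your own token):

localStorage.setItem('TOKEN_PERSIST_KEY', 'your-token-from-https://zoo.dev/account/api-tokens')
// then navigate to localhost:3000 again (not /signin, which clears the token)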

Development environment variables

The Copilot LSP plugin in the editor requires a Zoo API token to run. In production, we authenticate with a token via a cookie in the browser and a device auth token in the desktop environment, but this token is inaccessible in the dev browser version because the cookie is considered "cross-site" (from localhost to dev.zoo.dev). There is an optional environment variable called VITE_KC_DEV_TOKEN that you can populate with a dev token in a .env.development.local file (so it is not checked into Git); when set, the LSP service uses that token instead of the other methods.
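
For example, a .env.development.local could contain just this line (the value is a placeholder for your own dev token):

VITE_KC_DEV_TOKEN=your-dev-token-here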

Developing in Chrome

Chrome is in the process of rolling out a new default that blocks third-party cookies. If you're having trouble logging into the modeling-app, you may need to enable third-party cookies: click the eye with a slash through it in the URL bar, then click "Enable Third-Party Cookies".

Desktop

To spin up the desktop app, yarn install and yarn build:wasm need to have been run beforehand, then:

yarn tron:start

This will start the application and hot-reload on changes.

Devtools can be opened with the usual Cmd/Ctrl-Shift-I.

To build, run yarn tron:package.

Checking out commits / Bisecting

Which setup commands are one-off, and which need to be run every time?

The following need to be run when checking out a new commit to guarantee the build is not stale:

yarn install
yarn build:wasm-dev # or yarn build:wasm for slower but more production-like build
yarn start # or yarn build:local && yarn serve for slower but more production-like build

Before submitting a PR

Before you submit a contribution PR to this repo, please ensure that:

  • There is a corresponding issue for the changes you want to make, so that discussion of approach can be had before work begins.
  • You have separated out refactoring commits from feature commits as much as possible
  • You have run all of the following commands locally:
    • yarn fmt
    • yarn tsc
    • yarn test
    • Here they are all together: yarn fmt && yarn tsc && yarn test

Release a new version

1. Bump the versions by running ./make-release.sh

The ./make-release.sh script includes git commands to pull main, but to be sure you can run the following git commands to get a fresh main locally.

git branch -D main
git checkout main
git pull origin
./make-release.sh
# Copy the changelog (within the backticks) from the script's stdout to paste into the PR description
git push --set-upstream origin <branch name created from ./make-release.sh>

That will create the branch with the updated json files for you:

  • run ./make-release.sh or ./make-release.sh patch for a patch update;
  • run ./make-release.sh minor for minor; or
  • run ./make-release.sh major for major.

After it runs, you should just need to push the branch and open a PR.

2. Create a Cut Release PR

When you open the PR, copy the changelog from the output of the ./make-release.sh script into the PR description.

Important: the pull request title needs to be prefixed with Cut release v so that the app builds in release mode (and a few other things) and is tested in the most realistic context. For instance, the title should be Cut release v1.2.3 for the v1.2.3 release candidate.

The PR may then serve as a place to discuss the human-readable changelog and extra QA. The make-release.sh tool also suggests a changelog to be used as the PR description; just make sure to delete lines that are not user-facing.

3. Manually test artifacts from the Cut Release PR

Release builds

The release builds can be found under the out-{platform} zip, at the very bottom of the build-publish-apps summary page for each commit on this branch.

Manually test against this list across Windows, macOS, and Linux, posting results as comments in the Cut Release PR.

Updater-test builds

The other build-publish-apps output in Cut Release PRs is updater-test-{platform}. As we don't have a way to test this fully automatically, we have a semi-automated process. For macOS, Windows, and Linux: download the corresponding updater-test artifact file, install the app, run it, expect an updater prompt offering a dummy v0.255.255, install it, and check that the app comes back at that version.

The only difference with these builds is that they point to a different update location in the release bucket, where this dummy v0.255.255 is always available. This helps ensure that the version we release will be able to update to the next one available.

If the prompt doesn't show up, start the app from the command line to grab the electron-updater logs. This is likely an issue with the current build that needs addressing (or with the updater-test location in the storage bucket).

# Windows (PowerShell)
& 'C:\Program Files\Zoo Modeling App\Zoo Modeling App.exe'

# macOS
/Applications/Zoo\ Modeling\ App.app/Contents/MacOS/Zoo\ Modeling\ App

# Linux
'./Zoo Modeling App-{version}-{arch}-linux.AppImage'

4. Merge the Cut Release PR

This kicks off the create-release action, which creates a Draft release from the Cut Release PR merge in under a minute, with the new version as the title and the Cut Release PR as the description.

5. Publish the release

Head over to https://github.com/KittyCAD/modeling-app/releases; the draft release corresponding to the merged Cut Release PR should show up at the top as Draft. Click on it, verify the content, and hit Publish.

6. Profit

A new Action kicks in at https://github.com/KittyCAD/modeling-app/actions; it can be found under the release event filter.

Fuzzing the parser

Make sure you install cargo fuzz:

$ cargo install cargo-fuzz
$ cd src/wasm-lib/kcl

# list the fuzz targets
$ cargo fuzz list

# run the parser fuzzer
$ cargo +nightly fuzz run parser

For more information on fuzzing you can check out this guide.

Tests

Playwright tests

You will need a ./e2e/playwright/playwright-secrets.env file:

$ touch ./e2e/playwright/playwright-secrets.env
$ cat ./e2e/playwright/playwright-secrets.env
token=<dev.zoo.dev/account/api-tokens>
snapshottoken=<your-snapshot-token>

For a portable way to run Playwright you'll need Docker.

Generic example

After that, open a terminal and run:

docker run --network host  --rm --init -it playwright/chrome:playwright-x.xx.x

and in another terminal, run:

PW_TEST_CONNECT_WS_ENDPOINT=ws://127.0.0.1:4444/ yarn playwright test --project="Google Chrome" <test suite>

Specific example

Open a terminal and run:

docker run --network host  --rm --init -it playwright/chrome:playwright-1.46.0

and in another terminal, run:

PW_TEST_CONNECT_WS_ENDPOINT=ws://127.0.0.1:4444/ yarn playwright test --project="Google Chrome" e2e/playwright/command-bar-tests.spec.ts

To run a specific test, change it from test('... to test.only('... (note that if you commit this, CI will instantly fail without running any of the tests).
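
For example (a minimal sketch; the test name and assertions are hypothetical, the point is the test.only call):

// Focus a single Playwright test by switching test(...) to test.only(...)
import { test, expect } from '@playwright/test'

test.only('command bar opens', async ({ page }) => {
  await page.goto('http://localhost:3000')
  await expect(page.locator('video')).toBeVisible()
})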

Gotcha: running the docker container with a mismatched image against your ./node_modules/playwright will cause a failure. Make sure the versions are matched and up to date.

run headed

yarn playwright test --headed

run with step through debugger

PWDEBUG=1 yarn playwright test

However, if you want a debugger, I recommend using VSCode and the Playwright extension, as the above command is a cruder debugger that steps into every function call, which is annoying. With the extension you can set a breakpoint after waitForDefaultPlanesVisibilityChange in order to skip app loading; then the VSCode debugger's "step over" is much better at staying at the right level of abstraction as you debug the code.

If you want to limit the run to a single browser, use --project="webkit", --project="firefox", or --project="Google Chrome", or comment out browsers in playwright.config.ts.

Note that Chromium has encoder compatibility issues, which is why we're testing against the branded 'Google Chrome'.

You may consider using the VSCode extension; it's useful for running individual tests, but for some reason the "record a test" feature is locked to Chromium, which we can't use. A workaround is to run codegen from the CLI: yarn playwright codegen -b wk --load-storage ./store localhost:3000

Here, ./store should look like this:

{
  "cookies": [],
  "origins": [
    {
      "origin": "http://localhost:3000",
      "localStorage": [
        {
          "name": "store",
          "value": "{\"state\":{\"openPanes\":[\"code\"]},\"version\":0}"
        },
        {
          "name": "persistCode",
          "value": ""
        },
        {
          "name": "TOKEN_PERSIST_KEY",
          "value": "your-token"
        }
      ]
    }
  ]
}

However, because many of our tests involve clicking in the stream at specific locations, its codegen output looks like await page.locator('video').click(); when really we need to use pixel coordinates, so I think it's of limited use.
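
When you do need a pixel-coordinate click, Playwright supports it directly (a sketch; the coordinates below are arbitrary examples):

// Click the stream at a specific pixel offset within the <video> element
await page.locator('video').click({ position: { x: 600, y: 250 } })
// or click at absolute page coordinates
await page.mouse.click(600, 250)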

Unit and component tests

If you haven't already, run the following:

yarn
yarn build:wasm
yarn start

and finally:

yarn test:unit

For individual testing:

yarn test abstractSyntaxTree -t "unexpected closed curly brace" --silent=false

This will run our suite of Vitest unit and React Testing Library E2E tests, in interactive mode by default.

Rust tests

cd src/wasm-lib
KITTYCAD_API_TOKEN=XXX cargo test -- --test-threads=1

Where XXX is an API token from the production engine (NOT the dev environment).

We recommend using nextest to run the Rust tests (it's faster and is used in CI). Once installed, run the tests using:

cd src/wasm-lib
KITTYCAD_API_TOKEN=XXX cargo nextest run

Mapping CI/CD jobs to local commands

When you see CI/CD jobs fail, you may wonder three things:

  • Do I have a bug in my code?
  • Is the test flaky?
  • Is there a bug in main?

To answer these questions, the following commands will give you confidence in locating the issue.

Static Analysis

Part of the CI/CD pipeline performs static analysis on the code. Use the following commands to mimic the CI/CD jobs.

The following set of commands should get us closer to one-and-done commands for instantly retesting issues.

yarn test-setup

Gotcha: are packages up to date, and is the WASM built?

yarn tsc
yarn fmt-check
yarn lint
yarn test:unit:local

Gotcha: Our unit tests have integration tests in them. You need to run a localhost server to run the unit tests.

E2E Tests

Playwright Browser

These E2E tests run in a browser (without Electron). Some tests are skipped if they are run on Windows or Linux. We can use Playwright tags to implement test skipping (see the sketch after the commands below).

Breaking down the command yarn test:playwright:browser:chrome:windows

  • The application is playwright
  • The runtime is a browser
  • The specific browser is chrome
  • The test should run in a Windows environment. It will skip tests that are broken or flaky on Windows.
yarn test:playwright:browser:chrome
yarn test:playwright:browser:chrome:windows
yarn test:playwright:browser:chrome:ubuntu
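
A hedged sketch of how tag-based skipping can look in Playwright (the tag name and test are hypothetical; the repo's actual tags and npm scripts may differ):

// Tag a test in its title, then filter by tag when running
import { test } from '@playwright/test'

test('sketch on an offset plane @skipWin', async ({ page }) => {
  // ... test body ...
})

// A windows-flavored script can then exclude tagged tests, for example:
//   yarn playwright test --project="Google Chrome" --grep-invert "@skipWin"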

Playwright Electron

These E2E tests run in Electron. Some tests are skipped if they are run on Windows, Linux, or macOS. We can use Playwright tags to implement test skipping.

yarn test:playwright:electron:local
yarn test:playwright:electron:windows:local
yarn test:playwright:electron:macos:local
yarn test:playwright:electron:ubuntu:local

Why does it say local? The CI/CD commands that run in the pipeline cannot be run locally; a single command will not properly set up the testing environment on a local machine.

Some notes on CI

The tests are broken into snapshot tests and non-snapshot tests, and they run in that order. They automatically commit new snapshots, so if you see an image commit, check that it was an intended change. If we have non-determinism in the snapshots such that they keep committing new images, hopefully this annoyance makes us fix them ASAP; if you notice this happening, let Kurt know. For the odd occasion, git reset --hard HEAD~ && git push -f is your friend.

How do you interpret failing Playwright tests? If your tests fail, click through to the action: if the tests failed on a line that includes await page.getByTestId('loading').waitFor({ state: 'detached' }), it means the test failed because the stream never started. It's your choice whether to re-run the test or ignore the failure.

We run on Ubuntu and macOS, because Safari doesn't work on Linux due to the dreaded "no RTCPeerConnection variable" error. Linux runs first and then macOS, for the same reason that we limit the number of parallel tests to 1: we limit stream connections per user, so tests would start failing if we let them run together.
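
For reference, limiting Playwright to a single worker is a config setting (a sketch only; the repo's actual playwright.config.ts contains much more than this):

// playwright.config.ts (excerpt sketch)
import { defineConfig } from '@playwright/test'

export default defineConfig({
  // engine stream connections are limited per user, so run tests serially
  workers: 1,
})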

If something fails on CI, you can download the artifact, unzip it, and then open playwright-report/data/<UUID>.zip with https://trace.playwright.dev/ to see what happened.

Getting started writing a playwright test in our app

Besides following the instructions above and using the Playwright docs, note that our app is unusual because of the whole stream thing, which means our testing is unusual too. Because we've only just figured this stuff out, and docs might go stale quickly, here's a 15-minute video/tutorial:

Screen.Recording.2023-11-21.at.11.37.07.am.mp4
PS: for the debug panel, the following JSON is useful for snapping the camera
{"type":"modeling_cmd_req","cmd_id":"054e5472-e5e9-4071-92d7-1ce3bac61956","cmd":{"type":"default_camera_look_at","center":{"x":15,"y":0,"z":0},"up":{"x":0,"y":0,"z":1},"vantage":{"x":30,"y":30,"z":30}}}

KCL

For how to contribute to KCL, see our KCL README.