
Enable more pytests #107

Open · Devjiu opened this issue Aug 9, 2024 · 6 comments · Fixed by #106

Comments

Devjiu (Collaborator) commented Aug 9, 2024

Enable the remaining suites (where applicable) in pytest and add them to testing.

language:

  • test_line_info.py : Uses disassembly, so it requires objdump for CPU (1 test). See the sketch after this list.
  • test_reproducer.py : CUDA-only.
  • test_subprocess.py : Requires device print (7 passed, 27 failed).
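
For the test_line_info.py item, a minimal sketch of what an objdump-based CPU disassembly helper could look like, assuming the CPU backend produces an ELF shared object and that GNU objdump is on PATH; the function name and signature are illustrative, not the project's actual API:

```python
import subprocess

def disassemble_cpu_kernel(shared_object_path: str) -> str:
    """Hypothetical helper: disassemble a compiled CPU kernel with GNU objdump."""
    result = subprocess.run(
        ["objdump", "--disassemble", "--line-numbers", shared_object_path],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout
```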

runtime:

  • test_autotuner : Needs CPU support in the autotune config (4 tests); see the traceback below and the sketch after this list.
    python/triton/runtime/jit.py:327: in <lambda>
        return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
    python/triton/runtime/autotuner.py:156: in run
        timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
    python/triton/runtime/autotuner.py:156: in <dictcomp>
        timings = {config: self._bench(*args, config=config, **kwargs) for config in pruned_configs}
    python/triton/runtime/autotuner.py:133: in _bench
        return do_bench(kernel_call, warmup=self.num_warmups, rep=self.num_reps, quantiles=(0.5, 0.2, 0.8))
    python/triton/testing.py:133: in do_bench
        torch.cuda.synchronize()
    .venv/lib/python3.10/site-packages/torch/cuda/__init__.py:890: in synchronize
        _lazy_init()
    
  • test_cublas : CUDA-only (1 test).
  • test_subproc : Direct compiler calls (2 tests).
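
The traceback stems from do_bench unconditionally reaching torch.cuda.synchronize(). Below is a minimal sketch of a CPU-side benchmarking fallback, assuming wall-clock timing is acceptable; cpu_do_bench and its parameters are illustrative, not Triton's actual API:

```python
import time

def cpu_do_bench(kernel_call, n_warmup=10, n_repeat=100, quantiles=(0.5, 0.2, 0.8)):
    """Illustrative CPU benchmark loop: wall-clock timing, no torch.cuda.synchronize().

    Note: Triton's do_bench interprets warmup/rep as time budgets in milliseconds;
    this sketch uses plain iteration counts to keep the example short.
    """
    for _ in range(n_warmup):  # warm up caches and JIT state
        kernel_call()
    timings_ms = []
    for _ in range(n_repeat):
        start = time.perf_counter()
        kernel_call()
        timings_ms.append((time.perf_counter() - start) * 1e3)
    timings_ms.sort()
    n = len(timings_ms)
    # Return the requested quantiles in the same (median, low, high) shape do_bench uses.
    return [timings_ms[min(int(q * (n - 1)), n - 1)] for q in quantiles]
```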

tools:

  • test_aot : Many direct compiler calls (5 tests).
  • test_disasm : Uses cubin, CUDA-only (1 test).

in root:

  • test_debug_dump : Can pass, but uses unclear parameters.
  • test_perf_warning : GPU-specific; checks for MMA V3 usage. A minimal skip-guard sketch for these CUDA-only cases follows this list.
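
For the CUDA-only and GPU-specific items above, a minimal sketch of the kind of skip guard that could gate them on the CPU backend; is_cpu_backend and the TRITON_CPU_BACKEND environment variable are assumptions for illustration, not the project's actual helpers:

```python
import os

import pytest

def is_cpu_backend() -> bool:
    # Hypothetical check; the project may instead query the active Triton driver/target.
    return os.environ.get("TRITON_CPU_BACKEND", "0") == "1"

@pytest.mark.skipif(is_cpu_backend(), reason="requires CUDA (cubin disassembly / MMA V3)")
def test_cuda_only_example():
    ...
```
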
int3 (Collaborator) commented Aug 13, 2024

test_autotuner.py will be easy to enable once triton-lang#4496 lands.

And, as mentioned over Slack, rebasing on triton-lang#4334 will give us a lot more passing tests.

Devjiu (Collaborator, Author) commented Aug 13, 2024

python/test/unit/language/test_annotations.py
python/test/unit/language/test_block_pointer.py
python/test/unit/language/test_conversions.py
python/test/unit/language/test_compile_errors.py
python/test/unit/language/test_decorator.py
python/test/unit/language/test_pipeliner.py
python/test/unit/language/test_random.py
python/test/unit/language/test_standard.py
python/test/unit/runtime/test_bindings.py
python/test/unit/runtime/test_driver.py
python/test/unit/runtime/test_jit.py
python/test/unit/runtime/test_launch.py
python/test/unit/cpu/test_libdevice.py
python/test/unit/cpu/test_libmvec.py
python/test/unit/cpu/test_opt.py

SKIPPED [15] python/test/unit/language/test_block_pointer.py:38: Padding with NaN is not supported for integer types
SKIPPED [2] python/test/unit/language/test_conversions.py:338: Conversion from float32 to float16 is not supported on CPU
SKIPPED [2] python/test/unit/language/test_conversions.py:338: Conversion from float32 to bfloat16 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b15 to float16 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b15 to float32 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b8 to float32 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b8 to float16 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from float32 to float8e4b8 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from bfloat16 to float8e4b8 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from float16 to float8e4b8 is not supported on CPU

===================== 1477 passed, 26 skipped, 6 warnings in 905.81s (0:15:05) ======================

python/test/unit/language/test_core.py

SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float32', 'float8_e5m2') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('bfloat16', 'float8_e5m2') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e4m3fn', 'float16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e4m3fn', 'float32') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e4m3fn', 'bfloat16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e5m2', 'float16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e5m2', 'float32') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e5m2', 'bfloat16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float16', 'float8_e4m3fn') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float32', 'float8_e4m3fn') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('bfloat16', 'float8_e4m3fn') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float16', 'float8_e5m2') is not supported on CPU.
SKIPPED [56] python/test/unit/language/test_core.py:2341: Skipping linear_recurrence scan on bfloat16 due to accuracy issues
SKIPPED [6] python/test/unit/language/test_core.py:173: float8e4b15 not supported on CPU
SKIPPED [160] python/test/unit/language/test_core.py:3403: Test is skipped due to too long execution time on CPU

====================== 6508 passed, 246 skipped, 193 warnings in 422.45s (0:07:02) ======================

python/test/unit/runtime/test_cache.py

SKIPPED [1] python/test/unit/runtime/test_cache.py:441: Device Assert is not yet supported on CPU

====================== 24 passed, 1 skipped in 7.19s ======================

Devjiu (Collaborator, Author) commented Aug 13, 2024

Sorted by issue category:

Conversion issues (41 total):

  SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b15 to float16 is not supported on CPU
  SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b15 to float32 is not supported on CPU
  SKIPPED [6] python/test/unit/language/test_core.py:173: float8e4b15 not supported on CPU

  SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b8 to float32 is not supported on CPU
  SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b8 to float16 is not supported on CPU
  SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from float32 to float8e4b8 is not supported on CPU
  SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from bfloat16 to float8e4b8 is not supported on CPU
  SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from float16 to float8e4b8 is not supported on CPU

  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e4m3fn', 'float16') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e4m3fn', 'float32') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e4m3fn', 'bfloat16') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float16', 'float8_e4m3fn') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float32', 'float8_e4m3fn') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('bfloat16', 'float8_e4m3fn') is not supported on CPU.

  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float32', 'float8_e5m2') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('bfloat16', 'float8_e5m2') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e5m2', 'float16') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e5m2', 'float32') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float8_e5m2', 'bfloat16') is not supported on CPU.
  SKIPPED [2] python/test/unit/language/test_core.py:1652: test_cast('float16', 'float8_e5m2') is not supported on CPU.

  SKIPPED [2] python/test/unit/language/test_conversions.py:338: Conversion from float32 to bfloat16 is not supported on CPU

  SKIPPED [2] python/test/unit/language/test_conversions.py:338: Conversion from float32 to float16 is not supported on CPU

Padding with NaN (15 total):

  SKIPPED [15] python/test/unit/language/test_block_pointer.py:38: Padding with NaN is not supported for integer types

Bfloat16 accuracy (56 total):

  SKIPPED [56] python/test/unit/language/test_core.py:2341: Skipping linear_recurrence scan on bfloat16 due to accuracy issues

Too long execution (compilation) time (160 total):

  SKIPPED [160] python/test/unit/language/test_core.py:3403: Test is skipped due to too long execution time on CPU

Device assert (1 total):

  SKIPPED [1] python/test/unit/runtime/test_cache.py:441: Device Assert is not yet supported on CPU

Total number of tests: (6508 + 246) + (1477 + 26) + (24 + 1) = 8282
Skipped: 246 + 26 + 1 = 273

Percentage skipped: 273 / 8282 ≈ 3.3%
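
For reproducibility, the totals can be recomputed directly from the three pytest summaries above (the dictionary keys are just labels):

```python
# (passed, skipped) pairs taken from the three pytest summaries above.
suites = {
    "test_core.py": (6508, 246),
    "other language/runtime/cpu suites": (1477, 26),
    "test_cache.py": (24, 1),
}

total = sum(p + s for p, s in suites.values())  # 8282
skipped = sum(s for _, s in suites.values())    # 273
print(f"skipped {skipped}/{total} = {100 * skipped / total:.2f}%")  # ~3.30%
```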

Devjiu linked a pull request on Aug 13, 2024 that will close this issue.
Devjiu (Collaborator, Author) commented Aug 14, 2024

Update after rebase.

python/test/unit/language/test_core.py

206.71s call     test/unit/language/test_core.py::test_scan2d[cummax-bfloat16-shape874-1-True-16]
203.62s call     test/unit/language/test_core.py::test_scan2d[cummax-bfloat16-shape370-1-True-4]
122.93s call     test/unit/language/test_core.py::test_dot[1-128-128-64-2-True-True-none-ieee-float16-float16-1_0]
122.54s call     test/unit/language/test_core.py::test_dot[1-64-128-128-4-False-True-none-ieee-float16-float16-1_1]
121.29s call     test/unit/language/test_core.py::test_dot[1-128-256-32-8-False-True-none-ieee-float16-float16-1_0]
120.90s call     test/unit/language/test_core.py::test_dot[1-128-256-32-8-False-False-none-ieee-float16-float16-1_1]
120.87s call     test/unit/language/test_core.py::test_dot[1-64-64-64-4-False-False-add-rows-ieee-float16-float16-1]
120.67s call     test/unit/language/test_core.py::test_dot[1-64-128-128-4-True-False-none-ieee-float16-float16-1_0]
119.06s call     test/unit/language/test_core.py::test_dot[1-128-128-64-4-False-True-none-ieee-float16-float16-1_0]
118.15s call     test/unit/language/test_core.py::test_dot[1-128-256-32-8-True-True-none-ieee-float16-float16-1_0]
========================================================================== short test summary info ==========================================================================
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float8_e5m2', 'bfloat16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float16', 'float8_e4m3fn') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float32', 'float8_e4m3fn') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('bfloat16', 'float8_e4m3fn') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float16', 'float8_e5m2') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float32', 'float8_e5m2') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('bfloat16', 'float8_e5m2') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float8_e4m3fn', 'float16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float8_e4m3fn', 'float32') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float8_e4m3fn', 'bfloat16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float8_e5m2', 'float16') is not supported on CPU.
SKIPPED [2] python/test/unit/language/test_core.py:1665: test_cast('float8_e5m2', 'float32') is not supported on CPU.
SKIPPED [56] python/test/unit/language/test_core.py:2384: Skipping linear_recurrence scan on bfloat16 due to accuracy issues
SKIPPED [161] python/test/unit/language/test_core.py:3453: Test is skipped due to too long execution time on CPU
SKIPPED [10] python/test/unit/language/test_core.py:185: float8e4b15 not supported on CPU
========== 6522 passed, 251 skipped, 175 warnings in 463.27s (0:07:43) ================

python/test/unit/language/test_annotations.py
python/test/unit/language/test_block_pointer.py
python/test/unit/language/test_conversions.py
python/test/unit/language/test_compile_errors.py
python/test/unit/language/test_decorator.py
python/test/unit/language/test_pipeliner.py
python/test/unit/language/test_random.py
python/test/unit/language/test_standard.py
python/test/unit/runtime/test_bindings.py
python/test/unit/runtime/test_driver.py
python/test/unit/runtime/test_jit.py
python/test/unit/runtime/test_launch.py
python/test/unit/runtime/test_subproc.py
python/test/unit/cpu/test_libdevice.py
python/test/unit/cpu/test_libmvec.py
python/test/unit/cpu/test_opt.py

155.60s call     test/unit/language/test_standard.py::test_sort[float16-True-128-8]
142.94s call     test/unit/language/test_standard.py::test_sort[float16-False-64-16]
131.62s call     test/unit/language/test_standard.py::test_sort[float32-True-128-8]
108.02s call     test/unit/language/test_standard.py::test_sort[int32-False-128-8]
106.85s call     test/unit/language/test_standard.py::test_sort[bfloat16-False-128-8]
85.99s call     test/unit/language/test_standard.py::test_sort[float32-True-64-16]
85.71s call     test/unit/language/test_standard.py::test_sort[float16-True-64-16]
81.79s call     test/unit/language/test_pipeliner.py::test_pipeline_matmul
76.64s call     test/unit/language/test_standard.py::test_sort[int32-True-128-8]
========================================================================== short test summary info ==========================================================================
SKIPPED [15] python/test/unit/language/test_block_pointer.py:41: Padding with NaN is not supported for integer types
SKIPPED [2] python/test/unit/language/test_conversions.py:338: Conversion from float32 to float16 is not supported on CPU
SKIPPED [2] python/test/unit/language/test_conversions.py:338: Conversion from float32 to bfloat16 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b15 to float16 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b15 to float32 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b8 to float32 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:288: Conversion from float8e4b8 to float16 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from bfloat16 to float8e4b8 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from float32 to float8e4b8 is not supported on CPU
SKIPPED [1] python/test/unit/language/test_conversions.py:338: Conversion from float16 to float8e4b8 is not supported on CPU
============ 1513 passed, 26 skipped in 579.68s (0:09:39) =============

python/test/unit/language/test_line_info.py
python/test/unit/language/test_reproducer.py
python/test/unit/language/test_subprocess.py
python/test/unit/runtime/test_autotuner.py
python/test/unit/runtime/test_cublas.py
python/test/unit/runtime/test_subproc.py
python/test/unit/tools/test_aot.py
python/test/unit/tools/test_disasm.py
python/test/unit/test_debug_dump.py
python/test/unit/test_perf_warning.py

FAILED python/test/unit/test_perf_warning.py::test_mma_remark - AssertionError: expect MMA V3 remark

FAILED python/test/unit/tools/test_aot.py::test_compile_link_autotune_matmul - subprocess.CalledProcessError: Command '['/home/jovyan/triton-cpu/.venv/bin/python', '/home/jovyan/triton-cpu/python/triton/tools/compile.py', '-n', 'kernel', '--signature', '*fp32:16, *fp16:16, *fp16:16, i32, i32, i32, i32, i32:1, i32, i32:1, i32:16, i32:1, 16, 16, 16', '--out-name', 'matmul_fp16', '-o', 'matmul_fp16', '-w', '1', '-g', 'M/16, N/16, 1', '/tmp/tmp66m7fm7j/kernel.py']' returned non-zero exit status 1.
FAILED python/test/unit/tools/test_aot.py::test_compile_link_matmul_no_specialization - subprocess.CalledProcessError: Command '['/home/jovyan/triton-cpu/.venv/bin/python', '/home/jovyan/triton-cpu/python/triton/tools/compile.py', '-n', 'kernel', '--signature', '*fp32, *fp16, *fp16, i32, i32, i32, i32, i32, i32, i32, i32, i32, 16, 16, 16', '--out-name', 'matmul_fp16', '-o', 'matmul_fp16', '-w', '1', '-g', 'M/16, N/16, 1', '/tmp/tmpq3hmabzg/kernel.py']' returned non-zero exit status 1.
FAILED python/test/unit/tools/test_aot.py::test_compile_link_matmul - subprocess.CalledProcessError: Command '['/home/jovyan/triton-cpu/.venv/bin/python', '/home/jovyan/triton-cpu/python/triton/tools/compile.py', '-n', 'kernel', '--signature', '*fp32:16, *fp16:16, *fp16:16, i32, i32, i32, i32, i32:1, i32, i32:1, i32:16, i32:1, 16, 16, 16', '--out-name', 'matmul_fp16', '-o', 'matmul_fp16', '-w', '1', '-g', 'M/16, N/16, 1', '/tmp/tmpqd0gyy2t/kernel.py']' returned non-zero exit status 1.
FAILED python/test/unit/tools/test_aot.py::test_launcher_has_no_available_kernel - subprocess.CalledProcessError: Command '['/home/jovyan/triton-cpu/.venv/bin/python', '/home/jovyan/triton-cpu/python/triton/tools/compile.py', '-n', 'kernel', '--signature', '*fp32:16, *fp16:16, *fp16:16, i32, i32, i32, i32:1, i32:1, i32:1, i32:1, i32:16, i32:1, 16, 16, 16', '--out-name', 'matmul_fp16', '-o', 'matmul_fp16', '-w', '1', '-g', 'M/16, N/16, 1', '/tmp/tmp7e14zus6/kernel.py']' returned non-zero exit status 1.

FAILED python/test/unit/test_debug_dump.py::test_fn_dump - AssertionError: assert 'IR Dump Before' in ''

FAILED python/test/unit/runtime/test_autotuner.py::test_prune_configs[True] - RuntimeError: Found no NVIDIA driver on your system.
FAILED python/test/unit/runtime/test_autotuner.py::test_hooks - RuntimeError: Found no NVIDIA driver on your system.
FAILED python/test/unit/runtime/test_autotuner.py::test_prune_configs[False] - RuntimeError: Found no NVIDIA driver on your system.
FAILED python/test/unit/runtime/test_autotuner.py::test_kwargs[True] - RuntimeError: Found no NVIDIA driver on your system.
FAILED python/test/unit/runtime/test_autotuner.py::test_restore - RuntimeError: Found no NVIDIA driver on your system.
FAILED python/test/unit/runtime/test_autotuner.py::test_kwargs[False] - RuntimeError: Found no NVIDIA driver on your system.

FAILED python/test/unit/language/test_subprocess.py::test_assert_nested[true-true] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert[assert] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert[no_debug] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert[device_assert] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert_nested[none-true] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert_nested[true-none] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert_nested[false-true] - assert 0 == (128 - 1)
FAILED python/test/unit/language/test_subprocess.py::test_assert[double_assert] - assert 0 == (128 - 1)

FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-float16] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_hex-int64] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-uint8] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-float64] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_hex-int16] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-float64] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_negative-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-float16] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-uint8] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[static_print-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[print_no_arg-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-int8] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[print_multiple_args-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-float32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_pointer-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_large-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-long] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[print-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-int8] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_multiple_args-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-long] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-int16] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_uint-uint32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_hex-int32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print-float32] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[device_print_scalar-int16] - assert 1 == 0
FAILED python/test/unit/language/test_subprocess.py::test_print[no_arg_print-int32] - assert 1 == 0

SKIPPED [6] python/test/unit/language/test_line_info.py:197: interpreter is not enabled
SKIPPED [6] python/test/unit/language/test_line_info.py:152: disassembler is not available
SKIPPED [1] python/test/unit/language/test_reproducer.py:16: requires cuda
SKIPPED [8] python/test/unit/runtime/test_cublas.py:21: test_cublas is only supported on CUDA
SKIPPED [1] python/test/unit/tools/test_disasm.py:11: Test requires CUDA.
======= 48 failed, 12 passed, 22 skipped in 30.14s =========

Total: (6522 + 251) + (1513 + 26) + (48 + 12 + 22) = 8394

Failed or skipped: 251 + 26 + 48 + 22 = 347

Failed-or-skipped to total ratio: 347 / 8394 ≈ 4.13%

Devjiu (Collaborator, Author) commented Aug 14, 2024

| Reason | Number affected | Test | Comment |
| --- | --- | --- | --- |
| Slow dot3d compilation | 161 | test_core::test_dot3d | |
| bfloat16 accuracy issues | 56 | test_core::test_reduce | |
| Conversions | 45 = 2+2+2+2+2+2+2+2+2+2+2+2+10+2+2+1+1+1+1+1+1+1 | test_core::test_cast, float8e4b15, test_conversions::test_typeconvert_downcast | |
| Device print | 29 | test_subprocess::test_print | |
| Missing compiler/objdump/lib | 16 = 4 + 6 + 6 | test_aot, test_line_info | |
| Padding with NaN for integer | 15 | test_block_pointer::test_block_copy | Expected skip on all architectures |
| Requires CUDA | 10 = 1 + 8 + 1 | test_disasm, test_cublas, test_reproducer | |
| Device assert | 8 | test_subprocess::test_assert, test_subprocess::test_assert_nested | |
| Autotuner doesn't support CPU | 6 | test_autotuner | |
| MLIR enable-dump doesn't work | 1 | test_debug_dump::test_fn_dump | |
| MMA V3 not used | 1 | test_perf_warning::test_mma_remark | |

Devjiu (Collaborator, Author) commented Aug 14, 2024

Total pass rate without the CPU-only suites:

Total (without CPU): (6522 + 251) + (356 + 27) + (48 + 12 + 22) - 15 = 7223

Failed or skipped: 251 + 26 + 48 + 22 - 15 = 332

Ratio: 332 / 7223 ≈ 4.6%

We subtract 15 because the "padding with NaN for integer" case is skipped on all architectures, not just on CPU.

If PR #117 is used:

Total (without CPU): (6522 + (120 + 41 + 90)) + (356 + 27) + (48 + 12 + 22) - 15 = 7223

Failed or skipped: 41 + 90 + 26 + 48 + 22 - 15 = 212

Ratio: 212 / 7223 ≈ 2.94%

Devjiu reopened this on Aug 14, 2024.