
Comparing changes

base repository: python/mypy
base: v1.12.1
head repository: python/mypy
compare: v1.13.0
  • 12 commits
  • 15 files changed
  • 2 contributors

Commits on Oct 20, 2024

  1. b8429f4
  2. Significantly speed up file handling error paths (#17920)

    This can have a huge overall impact on mypy performance when search paths are long
    hauntsaninja committed Oct 20, 2024 · 2416dbf
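
    The diff isn't shown here, but a standard way to make failing lookups cheap is to cache negative filesystem results: with a long search path, most module probes miss, so repeating the failing os.stat (and the exception it raises) for every candidate directory dominates. A minimal sketch of that idea; the class and names are assumptions, not mypy's actual fscache code:

    ```python
    import os

    class StatCache:
        """Cache both successful and failed os.stat results.

        With long search paths most probes miss, so caching the failure
        avoids repeating the syscall for the same path.
        """

        def __init__(self) -> None:
            self._hits: dict[str, os.stat_result] = {}
            self._misses: dict[str, OSError] = {}

        def stat(self, path: str) -> os.stat_result:
            if path in self._hits:
                return self._hits[path]
            if path in self._misses:
                raise self._misses[path]  # cached miss: no syscall at all
            try:
                st = os.stat(path)
            except OSError as err:
                self._misses[path] = err
                raise
            self._hits[path] = st
            return st
    ```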
  3. Use sha1 for hashing (#17953)

    This is a pretty small win: it's below the noise floor on the
    macrobenchmark, but timing the hashing specifically shows it saves
    about 100ms (0.5%) on `python -m mypy -c 'import torch' --no-incremental`.
    blake2b is slower than sha1 here.
    hauntsaninja committed Oct 20, 2024 · 159974c
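
    A quick way to sanity-check the hash choice on your own machine is a stdlib microbenchmark; a sketch, not the PR's benchmark, and relative speeds vary with platform and OpenSSL build:

    ```python
    import hashlib
    import timeit

    data = b"x" * 4096  # stand-in for typical source-file contents

    for name in ("sha1", "blake2b", "sha256", "md5"):
        t = timeit.timeit(lambda: hashlib.new(name, data).digest(), number=100_000)
        print(f"{name:8s} {t:.3f}s per 100k hashes")
    ```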
  4. Let mypyc optimise os.path.join (#17949)

    See #17948
    
    There's one call site with varargs that I leave as os.path.join; it
    doesn't show up on my profile. The `endswith` does show up on the
    profile; we could try `path[-1] == '/'` instead, which could save a
    few dozen milliseconds.
    
    In my work environment, this is about a 10% speedup:
    ```
    λ hyperfine -w 1 -M 3 '/tmp/mypy_primer/timer_mypy_6eddd3ab1/venv/bin/mypy  -c "import torch" --no-incremental --python-executable /opt/oai/bin/python'
    Benchmark 1: /tmp/mypy_primer/timer_mypy_6eddd3ab1/venv/bin/mypy  -c "import torch" --no-incremental --python-executable /opt/oai/bin/python
      Time (mean ± σ):     30.842 s ±  0.119 s    [User: 26.383 s, System: 4.396 s]
      Range (min … max):   30.706 s … 30.927 s    3 runs
    ```
    Compared to:
    ```
    λ hyperfine -w 1 -M 3 '/tmp/mypy_primer/timer_mypy_88ae62b4a/venv/bin/mypy  -c "import torch" --no-incremental --python-executable /opt/oai/bin/python'
    Benchmark 1: /tmp/mypy_primer/timer_mypy_88ae62b4a/venv/bin/mypy  -c "import torch" --no-incremental --python-executable /opt/oai/bin/python
      Time (mean ± σ):     34.161 s ±  0.163 s    [User: 29.818 s, System: 4.289 s]
      Range (min … max):   34.013 s … 34.336 s    3 runs
    ```
    
    In the toy "long" environment mentioned in the issue, this is about a 7%
    speedup:
    ```
    λ hyperfine -w 1 -M 3 '/tmp/mypy_primer/timer_mypy_6eddd3ab1/venv/bin/mypy  -c "import torch" --no-incremental --python-executable long/bin/python'
    Benchmark 1: /tmp/mypy_primer/timer_mypy_6eddd3ab1/venv/bin/mypy  -c "import torch" --no-incremental --python-executable long/bin/python
      Time (mean ± σ):     23.177 s ±  0.317 s    [User: 20.265 s, System: 2.873 s]
      Range (min … max):   22.815 s … 23.407 s    3 runs
    ```
    Compared to:
    ```
    λ hyperfine -w 1 -M 3 '/tmp/mypy_primer/timer_mypy_88ae62b4a/venv/bin/mypy -c "import torch" --python-executable=long/bin/python --no-incremental'
    Benchmark 1: /tmp/mypy_primer/timer_mypy_88ae62b4a/venv/bin/mypy -c "import torch" --python-executable=long/bin/python --no-incremental
      Time (mean ± σ):     24.838 s ±  0.237 s    [User: 22.038 s, System: 2.750 s]
      Range (min … max):   24.598 s … 25.073 s    3 runs
    ```
    
    In the "clean" environment, this is a 1% speedup, but below the noise
    floor.
    hauntsaninja committed Oct 20, 2024 · e20aaee
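
    The point of the change is that a hand-written two-argument join can be compiled by mypyc into fast native code, whereas calling os.path.join stays a slow Python-level call. A POSIX-only sketch of such a helper, illustrative rather than the PR's exact code:

    ```python
    def os_path_join(a: str, b: str) -> str:
        # Specialised two-argument join that mypyc can compile directly,
        # avoiding a Python-level call into os.path. POSIX separators only.
        if not a:
            return b
        if b.startswith("/"):
            return b
        if a.endswith("/"):  # per the commit message, a[-1] == "/" may be cheaper
            return a + b
        return a + "/" + b
    ```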
  5. Use fast path in modulefinder more often (#17950)

    See #17948
    
    This is about 1.06x faster on `mypy -c 'import torch'` in both the
    clean and openai environments:
    - clean: 19.094 s -> 17.896 s
    - openai: 34.161 s -> 32.214 s
    
    ```
    λ hyperfine -w 1 -M 3 '/tmp/mypy_primer/timer_mypy_36738b392/venv/bin/mypy  -c "import torch" --no-incremental --python-executable clean/bin/python'
    Benchmark 1: /tmp/mypy_primer/timer_mypy_36738b392/venv/bin/mypy  -c "import torch" --no-incremental --python-executable clean/bin/python
      Time (mean ± σ):     17.896 s ±  0.130 s    [User: 16.472 s, System: 1.408 s]
      Range (min … max):   17.757 s … 18.014 s    3 runs
    
    λ hyperfine -w 1 -M 3 '/tmp/mypy_primer/timer_mypy_36738b392/venv/bin/mypy  -c "import torch" --no-incremental --python-executable /opt/oai/bin/python'
    Benchmark 1: /tmp/mypy_primer/timer_mypy_36738b392/venv/bin/mypy  -c "import torch" --no-incremental --python-executable /opt/oai/bin/python
      Time (mean ± σ):     32.214 s ±  0.106 s    [User: 29.468 s, System: 2.722 s]
      Range (min … max):   32.098 s … 32.305 s    3 runs
    ```
    hauntsaninja committed Oct 20, 2024 · 2cd2406
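
    A typical fast path for a module finder is to list each search directory once and answer existence checks with set membership instead of per-candidate stat calls. A sketch of that general technique; the helper names are illustrative, not mypy's:

    ```python
    import os
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def list_dir(path: str) -> frozenset[str]:
        """One cached os.listdir per directory; misses become cheap."""
        try:
            return frozenset(os.listdir(path))
        except OSError:
            return frozenset()

    def has_module(search_dir: str, name: str) -> bool:
        # O(1) membership tests against the cached listing instead of
        # an os.stat per candidate file.
        entries = list_dir(search_dir)
        return f"{name}.py" in entries or f"{name}.pyi" in entries or name in entries
    ```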
  6. Use orjson instead of json, when available (#17955)

    For `mypy -c 'import torch'`, the cache load time goes from 0.44s to
    0.25s, as measured by the manager's data_json_load_time. Timing the
    dump specifically, it goes from 0.65s to 0.07s. Overall a pretty
    reasonable perf win -- should we make it a required dependency?
    
    See also #3456
    hauntsaninja committed Oct 20, 2024 · 7c27808
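
    "When available" suggests the usual optional-dependency pattern: use orjson if it imports, otherwise fall back to the stdlib json module. A sketch; the helper names are assumptions, not necessarily mypy's:

    ```python
    try:
        import orjson

        def json_loads(data: bytes) -> object:
            return orjson.loads(data)

        def json_dumps(obj: object) -> bytes:
            return orjson.dumps(obj)

    except ImportError:
        import json

        def json_loads(data: bytes) -> object:
            return json.loads(data)

        def json_dumps(obj: object) -> bytes:
            return json.dumps(obj).encode("utf-8")
    ```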
  7. Speed up stubs suggestions (#17965)

    See #17948
    This is starting to show up on profiles
    
    - 1.01x faster on clean (below noise)
    - 1.02x faster on long
    - 1.02x faster on openai
    - 1.01x faster on openai incremental
    
    I had a dumb bug that was preventing the optimisation for a while;
    I'll see if I can make it even faster. Currently it's a small
    improvement.
    
    We could also get rid of the legacy stuff in mypy 2.0
    hauntsaninja committed Oct 20, 2024 · 50aa4ca
  8. Make is_sub_path faster (#17962)

    See #17948
    
    - 1.01x faster on clean
    - 1.06x faster on long
    - 1.04x faster on openai
    - 1.26x faster on openai incremental
    hauntsaninja committed Oct 20, 2024 · 854ad18
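
    A common way to speed up a containment check like this is to normalise both paths once and compare strings, rather than constructing pathlib objects per call. A sketch under that assumption, not necessarily the PR's exact logic:

    ```python
    import os

    def is_sub_path(path: str, base: str) -> bool:
        # Normalise to absolute paths once, then a plain string prefix
        # check; the trailing separator guards against /foo matching /foobar.
        path = os.path.abspath(path)
        base = os.path.abspath(base)
        return path == base or path.startswith(base + os.sep)
    ```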
  9. Add faster-cache extra, test in CI (#17978)

    Follow-up to #17955.
    hauntsaninja committed Oct 20, 2024 · 5c4d2db
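
    With the extra in place, users can opt into the orjson-backed cache with something like `pip install -U 'mypy[faster-cache]'`; the extra name comes from the commit title, while the exact install incantation here is illustrative.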

Commits on Oct 21, 2024

  1. bc0386b

Commits on Oct 22, 2024

  1. 2eeb588
  2. Bump version to 1.13.0

    hauntsaninja committed Oct 22, 2024 · eb31034