Add S3 and SIEVE, make S3 the default, remove clearing and locking
Closes #143

Memory Shavings
===============

It was the plan all along, but given my worries about cache overhead
it made no sense to keep the result objects (which compose the cache
entries) as dict-based instances, so they have been converted to
`__slots__` (manually, since `dataclasses` only supports slots from
3.10).

Sadly this requires adding an explicit `__init__` to every dataclass
involved, as default values are not compatible with `__slots__`.
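
The pattern looks something like this (a hand-rolled sketch; the
class name, fields, and defaults are illustrative rather than the
package's actual definitions):

```python
class Device:
    """Slotted result object: instances have no per-instance
    __dict__, just a fixed set of reference slots."""

    __slots__ = ("family", "brand", "model")

    # default values can't be class attributes alongside __slots__
    # (the class attribute would shadow the slot descriptor), so
    # defaults have to live in an explicit __init__ instead
    def __init__(self, family="Other", brand=None, model=None):
        self.family = family
        self.brand = brand
        self.model = model
```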

Cache Policies
==============

S3Fifo as default
-----------------

Testing on the sample file taught me what cache people have clearly
known for a while: LRU is *awful*. You can do worse, but it takes
surprisingly little to be competitive with it.

S3Fifo turns out to have pretty good performance while being
relatively simple. S3 is not perfect: notably, like most CLOCK-type
algorithms, its eviction is O(n), which might be a bit of an issue in
some cases. But until someone complains...

As a result, S3 is now the cache policy for the basic cache (if `re2`
is not available), replacing LRU, and it is also exported as `Cache`
from the package root.

From an implementation perspective, the original exploratory version
(of this and most FIFOs tested) used an ordered dict as an indexed
fifo, but its memory consumption was not great. The final version
uses a single index dict and separate deques for the FIFOs, an idea
found in @cmcaine's s3fifo, which significantly compacts memory
requirements (though they are still a good 50% higher than a SIEVE or
OD-based LRU of the same size).
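
For illustration, a minimal sketch of that layout: one index dict
plus three deques (small probationary fifo, main fifo, ghost fifo of
bare keys). This is not the package's actual implementation, and the
admission/eviction details are simplified:

```python
from collections import deque


class S3Fifo:
    """Sketch of S3-FIFO: a small probationary fifo (~10% of
    capacity), a main fifo, and a ghost fifo holding only keys."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.small_cap = max(1, maxsize // 10)
        self.index = {}     # key -> [value, freq]; value None = ghost
        self.small = deque()
        self.main = deque()
        self.ghost = deque()

    def __getitem__(self, key):
        entry = self.index[key]
        if entry[0] is None:             # ghost entries hold no value
            raise KeyError(key)
        entry[1] = min(entry[1] + 1, 3)  # saturating hit counter
        return entry[0]

    def __setitem__(self, key, value):
        entry = self.index.get(key)
        if entry is not None and entry[0] is not None:
            entry[0] = value             # live entry: update in place
        elif entry is not None:
            entry[0], entry[1] = value, 0
            self.main.append(key)        # ghost hit: readmit into main
        else:
            self.index[key] = [value, 0]
            self.small.append(key)       # brand new: probationary fifo
        while len(self.small) + len(self.main) > self.maxsize:
            self._evict()

    def _evict(self):
        if len(self.small) > self.small_cap or not self.main:
            key = self.small.popleft()
            entry = self.index[key]
            if entry[1] > 0:             # hit while probationary: promote
                entry[1] = 0
                self.main.append(key)
            else:                        # demote: keep the key as a ghost
                entry[0] = None
                self.ghost.append(key)
                while len(self.ghost) > self.maxsize:
                    g = self.ghost.popleft()
                    if self.index.get(g, (0,))[0] is None:
                        del self.index[g]
        else:
            key = self.main.popleft()
            entry = self.index[key]
            if entry[1] > 0:             # second chance, CLOCK-style
                entry[1] -= 1
                self.main.append(key)
            else:
                del self.index[key]
```

Keys evicted from the small fifo move to the ghost fifo (value
dropped), so a quick re-request gets admitted straight into the main
fifo instead of churning through probation again.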

LFU
---

Matani et al.'s O(1) LFU had a great showing on hit rates and
performance (though still slightly worse than S3); however, the
implementation would still have required adding some form of aging,
which was not worth it. Theoretically a straight LFU could work for
offline use, but that's a pretty pointless use case: offline, you can
just parse each unique value once and splat by the entry count.

W-TinyLFU is the big modern cheese in the field, but I opted to avoid
it for now: it is a lot more complicated than the existing caches
(requiring a bloom filter, a frequency sketch or counting bloom
filter, an SLRU, and an LRU), and a good implementation clearly
requires a lot of bit twiddling (for the bloom filters / frequency
sketch), which Python is not great at from a performance point of
view (I tried implementing CLOCK using a bytearray as a bitmap and it
was crap).

SIEVE
-----

SIEVE is consistently a few percentage points below S3, and it lacks
a few properties (e.g. scan resistance). However, it does have one
interesting property which S3 lacks: at small cache sizes it has less
memory overhead than LRU, despite using a Python-level linked list
and nodes where LRU gets the native-coded OrderedDict, with its
C-level linked list and bespoke secondary hashmap. And it does that
with the hit rates of an LRU double its size, until cache sizes reach
a significant fraction of the number of uniques (5000). It also
features a truly thread-safe unsynchronized cache hit.

Note: while the reference paper uses a doubly linked list, this
implementation uses a singly linked list for the sieve hand. This
means the hand is a pair of pointers but it saves 11% memory on the
nodes (72 -> 64 bytes), which gets significant as the size of the
cache increases.
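
A sketch of that design, with the hand kept as a (prev, node) pair
over a singly linked list, and the hit path being a single attribute
write (illustrative, not the actual implementation):

```python
class Node:
    __slots__ = ("key", "value", "visited", "next")

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.visited = False
        self.next = None  # points towards the newer end


class Sieve:
    """Sketch of SIEVE over a singly linked list: since nodes have
    no back pointer, the hand is a (prev, node) pair."""

    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.index = {}
        self.tail = None        # oldest node
        self.head = None        # newest node
        self.hand = (None, None)

    def __getitem__(self, key):
        node = self.index[key]
        # a hit is a single attribute write: no synchronisation needed
        node.visited = True
        return node.value

    def __setitem__(self, key, value):
        node = self.index.get(key)
        if node is not None:
            node.value = value
            node.visited = True
            return
        if len(self.index) >= self.maxsize:
            self._evict()
        node = Node(key, value)
        self.index[key] = node
        if self.head is not None:
            self.head.next = node
        self.head = node
        if self.tail is None:
            self.tail = node

    def _evict(self):
        prev, cur = self.hand
        if cur is None:
            prev, cur = None, self.tail
        # visited nodes get a second chance, clearing the bit
        while cur.visited:
            cur.visited = False
            prev, cur = cur, cur.next
            if cur is None:          # wrap the hand back to the tail
                prev, cur = None, self.tail
        # unlink cur, the first unvisited node from the hand on
        if prev is None:
            self.tail = cur.next
        else:
            prev.next = cur.next
        if cur is self.head:
            self.head = prev
        del self.index[cur.key]
        self.hand = (prev, cur.next)
```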

Other Caches
------------

A number of simple cache implementations were temporarily
~~embarrassed~~ implemented for testing:

- random
- fifo
- lp-fifo / fifo-reinsertion
- CLOCK (0 to 2), which is a different implementation of the same
  algorithm (fifo-reinsertion): a bitmap was horrible, while an array
  of counters was competitive perf-wise with an ordereddict-based
  lp-fifo (I had yet to start looking at memory use).
- QD-LP-FIFO, which is not *really* an algorithm but was an
  intermediate station on the way to S3 (the addition of a fixed-size
  probationary fifo and a ghost cache to an LP-FIFO; S3 is basically
  a more advanced and flexible version)

The trivial caches (RR, fifo) were worse than LRU but very simple;
the others were better than LRU but at the end of the day didn't
really pull their weight compared to the alternatives (even if they
were easy to implement).

An interesting note here is that the quick-demotion scheme of S3 can
be put in front of LRU with some success (it does improve hit rates
significantly, as the sample trace has a large number of one-hit
wonders), but without excellent reasons to use an LRU on the back end
it doesn't seem super useful.

Thread Safety
=============

The `Locking` wrapper has been removed, probably forever: testing
showed that the perf hit of a lock in GILPython was basically nil (at
least for the amount of work ua-python has to do, on uncontended
locks). Since none of the caches is intrinsically safe anymore (and
the clearing cache's lack of performance was a lot worse than any
synchronisation cost), it's better to just have synchronised caches.

Thread-local cache support has however been added, and will be
documented, in case it turns out to be of use to the !gil mode (it
basically trades memory and / or hit rate for lower contention).
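
A wrapper of that kind can be sketched as follows (hypothetical code,
assuming the wrapped caches expose a dict-like get/set interface):

```python
import threading


class Local:
    """Sketch of a thread-local cache wrapper: each thread lazily
    builds its own cache from the factory, trading memory (and hit
    rate, since threads don't share entries) for zero contention."""

    def __init__(self, factory):
        self.factory = factory
        self._local = threading.local()

    @property
    def cache(self):
        c = getattr(self._local, "cache", None)
        if c is None:
            # first access from this thread: build a fresh cache
            c = self._local.cache = self.factory()
        return c

    def __getitem__(self, key):
        return self.cache[key]

    def __setitem__(self, key, value):
        self.cache[key] = value
```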

s3fifo implementation notes
===========================

The initial implementation of S3Fifo used ordered dicts as indexed
fifos; this was easy, but after adding some memory tracking it turned
out to have a lot of overhead, at around 250% the overhead of Lru
(which makes sense: it needs two ordered dicts of about the same
size, plus a smaller ordered dict, plus entry objects to track
frequency).

An implementation based on deques is a lot more reasonable: it only
needs a single dict, and CPython's deques are implemented as unrolled
linked lists of order 64 (so each link of the list stores 64
elements). It still needs about 150% of the Lru space, but that is
far easier to justify. At n=5000, after a full run on the sample
file, the measurements from tracemalloc indicate 785576 bytes, with
`sys.getsizeof` measurements of the different elements indicating:

- 415152 bytes for the index dict
-   4984 bytes for the small cache deque
-  37720 bytes for the main cache deque
-  38248 bytes for the ghost cache deque
- 280000 bytes for the CacheEntry objects

For LRU this is 500488 bytes of which 498752 are attributed to the
`OrderedDict`.
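
Measurements of this kind can be reproduced along these lines (a
sketch with stand-in data; exact numbers vary with CPython version
and platform):

```python
import gc
import sys
import tracemalloc
from collections import OrderedDict, deque

# sys.getsizeof reports a container's own footprint, excluding the
# objects it references, which is what the per-structure numbers
# above measure
dq = deque(range(5000))
od = OrderedDict((i, i) for i in range(5000))
print(sys.getsizeof(dq), sys.getsizeof(od))  # the deque is far smaller

# tracemalloc measures the total allocation delta across a run
tracemalloc.start()
gc.collect()
before = tracemalloc.take_snapshot()
index = {str(i): object() for i in range(9500)}  # stand-in workload
gc.collect()
after = tracemalloc.take_snapshot()
print(sum(s.size_diff for s in after.compare_to(before, "filename")))
```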

It seems difficult to go lower: while in theory the ~9500 entries
should fit in a dict of class 14, cpython uses a dict one size class
larger to compact less often[^dict], because the dicts see a lot of
traffic (keys being constantly added and removed). (Possibly also
because they are never iterated, so compactness is not a concern,
though I have not checked whether that is a consideration.) However,
the issue also occurs in the LRU, so it's "fair" (while the
OrderedDict has a Python implementation which uses two maps, it also
has a native implementation which uses an internal ad-hoc hashmap
rather than a full-blown dict, so it doesn't quite have
double-hashmap overhead).

Note that this only measures *cache overhead*, so the cache keys are
not counted, and all parses result in a global singleton:

- user agent strings are around 195 bytes on average
- parse results, user agent, and os objects are 72 bytes
- device objects are 56 bytes
- the extracted strings total about 200 bytes on average[^interning]

That's some 600 bytes per cache entry, or 3000000 bytes for a
5000-entry cache. In view of that, the cache overhead hardly seems
consequential, but still.

[^dict]: Roughly, python's dicts have power-of-two size classes: a
         size class `n` leads to a total capacity of `1<<n` and an
         effective capacity of `(1<<n<<1)/3`. The dict object is
         composed of a sparse array of indices sized to the total
         capacity (these indices can be u8, u16, u32, or u64
         depending on the capacity), followed by a dense array of
         entries sized to the effective capacity. An entry is
         generally three pointers (hash, key, value) but can be just
         two as an optimisation e.g. for string keys (as strings
         memoise their own hash). Thus the space needed for a dict of
         class `n` is `sizeof(idx) * (1 << n) + (2|3) * 8 *
         ((1<<n<<1)/3)` (plus a few dozen bytes of various metadata),
         and for a dict of `len` entries the minimum class is
         `ceil(log2(len * 3/2))`. As such a 5000-entry string-keyed
         dict (Lru) should be of class 13, taking about 101kB, and a
         ~9500-entry dict (S3Fifo index) should be of class 14,
         taking about 202kB. These are what's observed by straight
         filling dicts to those sizes, but churning them a few
         hundred to a few thousand times (removing and adding keys,
         keeping their sizes constant) ends up one size class above.
         I've not confirmed it, but it's likely because a dict of
         class 14 has an effective capacity of 10922, which means
         every ~1500 removals and insertions the dense array would
         need to be compacted, rehashed, and rewritten. By bumping
         over to class 15, this happens every ~12000 cycles instead,
         at the cost of double the memory.
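
The footnote's formulas can be checked with a small helper (a sketch
following the rule as described above; CPython's actual index-width
selection and minimum table size may differ at the margins):

```python
import math


def dict_class(n):
    """Minimum size class for n entries: the smallest c such that
    the effective capacity (1<<c)*2//3 is at least n."""
    # CPython's smallest hash table is 8 slots, i.e. class 3
    return max(3, math.ceil(math.log2(n * 3 / 2)))


def dict_bytes(c, entry_words=2):
    """Approximate footprint of a dict of class c, per the formula
    in the footnote: entry_words=2 for the string-key optimisation,
    3 for the general (hash, key, value) layout."""
    total = 1 << c              # sparse index array length
    usable = (total << 1) // 3  # dense entry array length
    if total <= 0xFF:
        idx = 1
    elif total <= 0xFFFF:
        idx = 2
    elif total <= 0xFFFFFFFF:
        idx = 4
    else:
        idx = 8
    return idx * total + entry_words * 8 * usable


# 5000-entry string-keyed dict: class 13, ~101kB
print(dict_class(5000), dict_bytes(dict_class(5000)))
# ~9500-entry dict: class 14, ~202kB
print(dict_class(9500), dict_bytes(dict_class(9500)))
```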

[^interning]: Technically it's around 500, but single-character
              strings are always interned, and those are common for
              the version fields of UserAgent and OS (about 56% of
              them); they account for most of the possible overhead.
              2- and 3-character strings account for a further 24 and
              17%, though with diminishing returns: 2-char strings
              seem the most promising, as 93 of them are represented
              (91 being two-digit numbers) and almost all of them
              occur more than once (the sample file has only two
              singleton two-char strings, only one of which is a
              number); by comparison, all 3-character strings are
              numbers, but 57 out of 251 are singletons.
masklinn committed Mar 12, 2024
1 parent 0367c3b commit b45380d
Showing 6 changed files with 276 additions and 146 deletions.
8 changes: 3 additions & 5 deletions src/ua_parser/__init__.py
@@ -22,12 +22,10 @@
__all__ = [
"BasicResolver",
"CachingResolver",
"Clearing",
"Cache",
"DefaultedParseResult",
"Device",
"Domain",
"LRU",
"Locking",
"Matchers",
"OS",
"ParseResult",
@@ -46,7 +44,7 @@
from typing import Callable, Optional

from .basic import Resolver as BasicResolver
from .caching import CachingResolver, Clearing, Locking, LRU
from .caching import CachingResolver, S3Fifo as Cache
from .core import (
DefaultedParseResult,
Device,
@@ -77,7 +75,7 @@ def from_matchers(cls, m: Matchers, /) -> Parser:
return cls(
CachingResolver(
BasicResolver(m),
Locking(LRU(200)),
Cache(200),
)
)

86 changes: 55 additions & 31 deletions src/ua_parser/__main__.py
@@ -1,5 +1,6 @@
import argparse
import csv
import gc
import io
import itertools
import math
@@ -8,19 +9,29 @@
import sys
import threading
import time
from typing import Any, Callable, Iterable, List, Optional, Sequence, Tuple, Union
import tracemalloc
from typing import (
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Sequence,
Tuple,
Union,
cast,
)

from . import (
BasicResolver,
CachingResolver,
Clearing,
Domain,
Locking,
LRU,
Matchers,
Parser,
PartialParseResult,
Resolver,
caching,
)
from .caching import Cache, Local
from .loaders import load_builtins, load_yaml
@@ -34,6 +45,17 @@
}


CACHES: Dict[str, Optional[Callable[[int], Cache]]] = {"none": None}
CACHES.update(
(cache.__name__.lower(), cache)
for cache in [
cast(Callable[[int], Cache], caching.Lru),
caching.S3Fifo,
caching.Sieve,
]
)


def get_rules(parsers: List[str], regexes: Optional[io.IOBase]) -> Matchers:
if regexes:
if not load_yaml:
@@ -156,18 +178,13 @@ def get_parser(
else:
sys.exit(f"unknown parser {parser!r}")

c: Callable[[int], Cache]
if cache == "none":
return Parser(r).parse
elif cache == "clearing":
c = Clearing
elif cache == "lru":
c = LRU
elif cache == "lru-threadsafe":
c = lambda size: Locking(LRU(size)) # noqa: E731
else:
if cache not in CACHES:
sys.exit(f"unknown cache algorithm {cache!r}")

c = CACHES.get(cache)
if c is None:
return Parser(r).parse

return Parser(CachingResolver(r, c(cachesize))).parse


@@ -182,14 +199,16 @@ def run(


def run_hitrates(args: argparse.Namespace) -> None:
def noop(ua: str, domains: Domain, /) -> PartialParseResult:
return PartialParseResult(
domains=domains,
string=ua,
user_agent=None,
os=None,
device=None,
)
r = PartialParseResult(
domains=Domain.ALL,
string="",
user_agent=None,
os=None,
device=None,
)

def noop(_ua: str, _domains: Domain, /) -> PartialParseResult:
return r

class Counter:
def __init__(self, parser: Resolver) -> None:
@@ -206,19 +225,25 @@ def __call__(self, ua: str, domains: Domain, /) -> PartialParseResult:
print(total, "lines", uniques, "uniques")
print(f"ideal hit rate: {(total - uniques)/total:.0%}")
print()
caches: List[Callable[[int], Cache]] = [Clearing, LRU]
w = int(math.log10(max(args.cachesizes)) + 1)
tracemalloc.start()
for cache, cache_size in itertools.product(
caches,
filter(None, CACHES.values()),
args.cachesizes,
):
misses = Counter(noop)
gc.collect()
before = tracemalloc.take_snapshot()
parser = Parser(CachingResolver(misses, cache(cache_size)))
for line in lines:
parser.parse(line)

gc.collect()
after = tracemalloc.take_snapshot()
diff = sum(s.size_diff for s in after.compare_to(before, "filename"))
print(
f"{cache.__name__.lower()}({cache_size}): {(total - misses.count)/total:.0%} hit rate"
f"{cache.__name__.lower():8}({cache_size:{w}}): {(total - misses.count)/total*100:2.0f}% hit rate, {diff:9} bytes"
)
del misses, parser


CACHESIZE = 1000
@@ -242,9 +267,8 @@ def run_threaded(args: argparse.Namespace) -> None:
lines = list(args.file)
basic = BasicResolver(load_builtins())
resolvers: List[Tuple[str, Resolver]] = [
("clearing", CachingResolver(basic, Clearing(CACHESIZE))),
("locking-lru", CachingResolver(basic, Locking(LRU(CACHESIZE)))),
("local-lru", CachingResolver(basic, Local(lambda: LRU(CACHESIZE)))),
("locking-lru", CachingResolver(basic, caching.Lru(CACHESIZE))),
("local-lru", CachingResolver(basic, Local(lambda: caching.Lru(CACHESIZE)))),
("re2", Re2Resolver(load_builtins())),
]
for name, resolver in resolvers:
@@ -367,8 +391,8 @@ def __call__(
bench.add_argument(
"--caches",
nargs="+",
choices=["none", "clearing", "lru", "lru-threadsafe"],
default=["none", "clearing", "lru", "lru-threadsafe"],
choices=list(CACHES),
default=list(CACHES),
help="""Cache implementations to test. `clearing` completely
clears the cache when full, `lru` uses a least-recently-eviction
policy. `lru` is not thread-safe, so `lru-threadsafe` adds a mutex
