path: root/lib/bb/cache.py
author      Richard Purdie <richard.purdie@linuxfoundation.org>  2014-07-25 14:54:23 +0100
committer   Richard Purdie <richard.purdie@linuxfoundation.org>  2014-07-25 15:23:28 +0100
commit      4aaf56bfbad4aa626be8a2f7a5f70834c3311dd3 (patch)
tree        1f6acc9f586e17ffe573a21ce0754e9ea14585f5 /lib/bb/cache.py
parent      c22441f7025be012ad2e62a51ccb993c3a0e16c9 (diff)
download    bitbake-4aaf56bfbad4aa626be8a2f7a5f70834c3311dd3.tar.gz
codeparser cache improvements
It turns out the codeparser cache is the bottleneck I've been observing
when running bitbake commands, particularly as it grows. There are some
things we can do about this:

* We were processing the cache with "intern()" at save time. It's
  actually much more memory efficient to do this at creation time.
* Use hashable objects such as frozenset rather than set so that we can
  compare objects.
* De-duplicate the cache objects, linking duplicates to the same object,
  saving memory and disk usage and improving speed.
* Use custom setstate/getstate to avoid the overhead of object
  attribute names in the cache file.

To make this work, a global cache was needed for the list of set
objects, as this was the only way I could find to get the data in at
setstate object creation time :(.

Parsing shows a modest improvement with these changes, cache load time
is significantly better, cache save time is reduced since there is now
no need to reprocess the data, and the cache is much smaller.

We can drop the compress_keys() code and internSet code from the shared
cache core since they are no longer used, replaced by codeparser-specific
pieces.

Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
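The interning, frozenset de-duplication, and getstate/setstate techniques described above can be sketched roughly as follows. This is an illustrative Python sketch, not bitbake's actual code: the names `_setcache`, `dedup_frozenset`, and `CacheEntry` are hypothetical, and it assumes a simple entry holding two sets of strings.

```python
import sys

# Hypothetical global table de-duplicating frozensets across entries.
# A global is needed because __setstate__ runs at unpickle time, when
# there is no other shared context to look duplicates up in.
_setcache = {}

def dedup_frozenset(items):
    """Intern strings at creation time and share identical frozensets.

    frozenset (unlike set) is hashable, so it can serve as its own
    dictionary key; setdefault() returns the first instance seen, so
    equal sets collapse to one shared object.
    """
    fs = frozenset(sys.intern(i) for i in items)
    return _setcache.setdefault(fs, fs)

class CacheEntry:
    """Entry with custom getstate/setstate: pickling a bare tuple
    instead of an attribute __dict__ keeps attribute names out of
    the cache file, shrinking it."""
    __slots__ = ("refs", "execs")

    def __init__(self, refs, execs):
        self.refs = dedup_frozenset(refs)
        self.execs = dedup_frozenset(execs)

    def __getstate__(self):
        # Sorted tuples: compact, deterministic on-disk form.
        return (sorted(self.refs), sorted(self.execs))

    def __setstate__(self, state):
        refs, execs = state
        # Re-deduplicate on load via the global table, so equal sets
        # restored from the cache again share one object in memory.
        self.refs = dedup_frozenset(refs)
        self.execs = dedup_frozenset(execs)
```

With this shape, two entries built from equal inputs share the same frozenset object (`a.refs is b.refs`), and the sharing survives a pickle round trip because `__setstate__` routes back through the same global table.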
Diffstat (limited to 'lib/bb/cache.py')
-rw-r--r--  lib/bb/cache.py  12
1 file changed, 0 insertions(+), 12 deletions(-)
diff --git a/lib/bb/cache.py b/lib/bb/cache.py
index c7f3b7ab7..f892d7dc3 100644
--- a/lib/bb/cache.py
+++ b/lib/bb/cache.py
@@ -764,16 +764,6 @@ class MultiProcessCache(object):
self.cachedata = data
- def internSet(self, items):
- new = set()
- for i in items:
- new.add(intern(i))
- return new
-
- def compress_keys(self, data):
- # Override in subclasses if desired
- return
-
def create_cachedata(self):
data = [{}]
return data
@@ -833,8 +823,6 @@ class MultiProcessCache(object):
self.merge_data(extradata, data)
os.unlink(f)
- self.compress_keys(data)
-
with open(self.cachefile, "wb") as f:
p = pickle.Pickler(f, -1)
p.dump([data, self.__class__.CACHE_VERSION])