[cabfpub] [Trans] What's the load on a CT log?
benl at google.com
Thu Mar 13 16:51:20 UTC 2014
On 13 March 2014 16:31, Daniel Kahn Gillmor <dkg at fifthhorseman.net> wrote:
> On 03/13/2014 12:06 PM, Ben Laurie wrote:
>> So, total average load is 3 * b * w / l ~ 20,000 web fetches per second
> This part i follow (you're switching temporal units between months and
> years and seconds, but i get roughly the same final figures)
>> If we optimise the API we can get that down to 7,000 qps. Each
>> query (in the optimised case) would be around 3 kB,
> And i agree this seems like a win. Why was the API broken into three
> parts instead of the complete proof originally? what (other than
> conceptual cleanliness) might we lose by creating the optimized API?
>> which gives a bandwidth of around 150 kb/s.
> This looks off by a few orders of magnitude to me. 7kqps and 3kB/q
> gives me 7000*3000*8 bits per second, which is 168 Mbps. Am i missing
> something?
Sorry, you are correct - I meant 150,000 kb/s!
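A quick sanity check of the arithmetic in question (a sketch; the 7,000 qps and ~3 kB-per-query figures are the ones quoted in the thread):

```python
# Bandwidth estimate for the optimised CT proof API,
# using the figures from the discussion above.
qps = 7_000              # optimised query rate, queries per second
bytes_per_query = 3_000  # ~3 kB per proof response

bits_per_second = qps * bytes_per_query * 8
print(f"{bits_per_second / 1e6:.0f} Mbps")  # 168 Mbps
print(f"{bits_per_second / 1e3:.0f} kb/s")  # 168000 kb/s
```

So the total comes to 168 Mbps, i.e. on the order of 150,000 kb/s rather than 150 kb/s.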
> Should we be considering swarm-based distribution of this kind of data,
> or hierarchical proxying for load distribution?
One thing we're working on is distributing the proofs via DNS, which
is obviously of exactly that nature - which would definitely reduce
bandwidth at the servers.