• lad@programming.dev · 6 months ago (edited)

    Randomization scares me a bit, but one can run several copies at the same time to get a better estimate, I guess. I like that you can easily obtain the granularity of the estimate after stopping: 2^k is the increment size after the kth round.
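    If the algorithm in question is the CVM distinct-elements estimator (which the 2^k-granularity remark suggests), a minimal sketch of it, plus the "run several copies and take the median" idea, might look like this — the function names and buffer-overflow handling are my own assumptions, not from the thread:

    ```python
    import random

    def cvm_estimate(stream, buffer_size):
        # CVM sketch (assumed): keep each element with probability p,
        # halving p each time the buffer fills up ("round").
        p = 1.0
        buf = set()
        for x in stream:
            buf.discard(x)              # re-decide membership on repeats
            if random.random() < p:
                buf.add(x)
            if len(buf) == buffer_size:
                # kth round: each element survives with probability 1/2
                buf = {y for y in buf if random.random() < 0.5}
                p /= 2
                if len(buf) == buffer_size:
                    raise RuntimeError("overflow")  # negligible-probability case
        # after k halvings, p = 2^-k, so each survivor stands for 2^k elements
        return len(buf) / p

    def median_of_runs(stream, buffer_size, runs):
        # several independent copies, median of the estimates
        data = list(stream)
        estimates = sorted(cvm_estimate(data, buffer_size) for _ in range(runs))
        return estimates[runs // 2]
    ```

    With a buffer larger than the true distinct count, p never halves and the answer is exact; the interesting (randomized) regime is when the buffer is much smaller than the number of distinct elements.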

    I wonder what the error distribution is, and how probable it is that the error does not exceed 2^k — maybe I should read the article after all 😅

    Thank you for the excerpt.

    Edit: looks like if we have an (ε, δ)-approximation of the data distribution, the error would be less than δ/4