• 0 Posts
  • 15 Comments
Joined 3 months ago
Cake day: October 9th, 2024




  • That article gave me whiplash. First part: pretty cool. Second part: deeply questionable.

    For example, these two paragraphs from the sections "problem with code" and "magic of data":

    "Modular and interpretable code" sounds great until you are staring at 100 modules with 100,000 lines of code each and someone is asking you to interpret it.

    Regardless of how complicated your program's behavior is, if you write it as a neural network, the program remains interpretable. To know what your neural network actually does, just read the dataset.

    Well, "just read the dataset bro" sounds great until you are staring at a dataset with 100,000 examples and someone is asking you to interpret it.





  • Yeah, neural network training is notoriously easy to reproduce /s.

    Just a few things can affect the results: source data, data labels, network structure, training parameters, version of the training script, versions of libraries, seed for the random number generator, hardware, operating system.
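
    To illustrate just the RNG-seed item from that list, here is a toy Python sketch (all names hypothetical, and a stand-in for real training): the "result" here depends only on the seed, while a real run also depends on every other factor listed.

    ```python
    import random

    def train_step_sketch(seed):
        # Toy stand-in for a training run: an explicit, seeded RNG makes this
        # function deterministic. Real frameworks have many more RNGs (library,
        # GPU kernels, data-loader workers), each of which must be pinned.
        rng = random.Random(seed)
        weights = [rng.uniform(-1.0, 1.0) for _ in range(4)]
        return weights

    # Same seed -> identical "trained" weights; a different seed -> different weights.
    assert train_step_sketch(42) == train_step_sketch(42)
    assert train_step_sketch(42) != train_step_sketch(43)
    ```

    And that is the easy factor: even with the seed pinned, changing the library version or the hardware can still shift the outcome.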

    Also, deployment is another can of worms.

    Also, even if you have an open-source script, data, and labels, there's no guarantee you'll have useful documentation for any of them.