Dailydave mailing list archives
Re: AI
From: Allen DeRyke <allen.deryke () gmail com>
Date: Wed, 30 Mar 2016 16:07:09 -0400
For the time being I remain skeptical of ML "solutions" to intrusion detection problems. In the BJJ image-processing example, it's fairly simple for humans to sanity check the results of ML. Humans are very good at image processing, which means we're pretty likely to do a good job spotting ML errors while working over the training data. Our ML progress in the image processing space is a byproduct of our innate biological adaptations for image processing.

When we start applying ML to problem spaces that humans are not very good at, we run the risk of implementing solutions that very few humans can grok. If you don't have a good grasp on the problem, and training data where an answer is known, then your ML solution might just be snake oil 7.0. What assurance do we have that an intrusion on one person's infrastructure will "look" anything at all like an intrusion on another person's infrastructure?

I think the desire to have a magic blinky light appliance in a data center somewhere will virtually guarantee that ML solutions will be created, marketed, and sold. After wide-scale implementation I think we'll notice that the dwell time and overall intrusion impacts remain about the same.

-- Allen Deryke

On Wed, Mar 30, 2016 at 8:56 AM, dave aitel <dave () immunityinc com> wrote:
There are only a few real computers in the world, and I think we are just beginning to feel their influence. For example, here is a sample project I am working on now that image classification is a solved problem.

Like many of you on this list, I dabble in Brazilian jiu-jitsu. In fact, in a week we are doing an open mat at INFILTRATE, both for newcomers who've always wanted to try to choke me out and for people in the community who are already very good at choking people.

Like many sports, BJJ is typically scored according to a ruleset based on the different positions you end up in. Being on top is usually better. Being able to get on top after you are on the bottom is worth 2 points. Being able to completely mount someone is worth 3 points. Getting on their back is 4 points. Generally a tournament will hire judges, and they will award points based on their understanding of the rules, their personal feelings towards the contestants, and whatever other factors are floating in their heads.

What I'm working on is collecting a set of images of BJJ, then annotating them as to what positions the different people are in. This essentially maps every image into a vector space, and after training a neural network using modern techniques you can have a program that looks at an image and then outputs "Blue is in top mount". Part of the key here is that you don't have to tell it that the picture is BJJ. Every picture the program sees is two people doing BJJ. All it has to do is output what positions they are in. And in the end, by assigning point values to transitions between positions, you will have an automatic BJJ judge.

I've applied for a TensorFlow API key from Google since, although this is not a hard problem by ML standards, I want to do it the right way and get good scalable results on video later. And of course, the same thing is true for the process information El Jefe will give you.
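[The scoring step Dave describes, turning a stream of per-frame position labels into points, can be sketched in a few lines. This is a hypothetical illustration, not Dave's actual code; the position names and the transition table are assumptions, with point values taken from the rules as stated above (sweep 2, mount 3, back take 4).]

```python
# Hypothetical sketch of an automatic BJJ judge's scoring stage.
# Input: a chronological list of position labels for one athlete,
# as a trained image classifier might emit them frame by frame.
# The transition table and label names are illustrative only.

POINTS = {
    ("bottom", "top"): 2,   # sweep: getting on top from the bottom
    ("top", "mount"): 3,    # achieving full mount
    ("mount", "back"): 4,   # taking the back
}

def score(positions):
    """Sum the points awarded for each scoring transition
    between consecutive classifier outputs."""
    total = 0
    for prev, cur in zip(positions, positions[1:]):
        total += POINTS.get((prev, cur), 0)
    return total

print(score(["bottom", "top", "mount", "back"]))  # 2 + 3 + 4 = 9
```

The classifier itself does the hard part; once each frame is reduced to a position label, judging collapses into a lookup table like this one.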
All those "behavioral analysis machine learning intrusion detection" startups are about to be crushed by simple open source projects that use Google's, MS's, and Amazon's exported machine learning APIs.

-dave

_______________________________________________ Dailydave mailing list Dailydave () lists immunityinc com https://lists.immunityinc.com/mailman/listinfo/dailydave