I guess you could do this with Python+Jupyter as well, but I dunno, this just really feels nifty to me, like it's a natural extension of the REPL experience. It also helped massively while testing out threshold values for the image to see what would work best for classification.
I wired it all up by having FFmpeg dump out a series of BMP images to a pipe (so I could quickly parse the buffer size and read it into <code>lisp-magick-wand</code>) and set up a parallelized loop to call <code>classify</code> and store the parsed-out data.
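The actual wiring is in the snippet below; for context, here's a minimal sketch of what the FFmpeg/pipe side could look like. This isn't the original code: the input path is a placeholder, and the launch goes through <code>uiop:launch-program</code>. The only real trick is that bytes 2–5 of each BMP file header carry the total frame size (little-endian), which tells you how many bytes to read per frame.

<syntaxhighlight lang="lisp">
;; Sketch only: launch FFmpeg so it writes BMP-encoded frames to stdout,
;; then read one frame at a time by parsing the size field in each header.
;; The input path and flags are illustrative, not the original invocation.
(defun open-bmp-stream (input-path)
  "Start FFmpeg emitting BMP frames on stdout; return the binary stream."
  (uiop:process-info-output
   (uiop:launch-program
    (list "ffmpeg" "-i" input-path "-f" "image2pipe" "-vcodec" "bmp" "-")
    :output :stream
    :element-type '(unsigned-byte 8))))

(defun read-bmp-frame (stream)
  "Read a single BMP image from STREAM and return it as a byte vector.
Bytes 2-5 of the 14-byte BMP file header hold the total size, little-endian."
  (let ((header (make-array 14 :element-type '(unsigned-byte 8))))
    (when (= (read-sequence header stream) 14)
      (let* ((size (loop for i from 0 below 4
                         sum (ash (aref header (+ 2 i)) (* 8 i))))
             (frame (make-array size :element-type '(unsigned-byte 8))))
        (replace frame header)                 ; keep the header bytes
        (read-sequence frame stream :start 14) ; then the rest of the frame
        frame))))
</syntaxhighlight>

Each frame's byte vector can then be handed off to <code>lisp-magick-wand</code> and on to <code>classify</code>, which is what the real loop below does.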
<syntaxhighlight lang="lisp">