-Integrating neuropsychological concepts into the architecture design and training of transformer models
-Integrating statistical methods into img2img diffusion models for image processing
-Token vectorisation for thematic analysis
-Small language models for task execution
I have been seeing a specific type of AI hype more and more. I call it: releasing research expecting that no one will ever reproduce your methods, then overhyping your results. I test the methodology of maybe 4-5 research papers per day; that is how I find a lot of my research. Usually, 3-4 of those experiments end up not being reproducible for some reason. I am starting to think it is not accidental.
So, I am launching a new series where I showcase a specific research paper by reproducing its methodology and highlighting the blatant flaws that show up when you actually do this. Here is Episode 1!