Enable the RKNN2 runtime's internal multicore tensor-parallel function, reducing inference latency by ~1.5 s. (commit b5976ca, happyme531, Nov 12, 2024)
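
For reference, below is a minimal sketch of how multicore NPU execution is typically requested on an RK3588 through the rknn-toolkit-lite2 Python API; it is not the repo's actual inference script, and the model path and input shape are placeholders.

```python
# Sketch: run an .rknn model across all three RK3588 NPU cores so the runtime
# can split supported ops between them (instead of using a single core).
import numpy as np
from rknnlite.api import RKNNLite

rknn = RKNNLite()

# Placeholder model path -- substitute the actual converted .rknn file.
if rknn.load_rknn('./model.rknn') != 0:
    raise RuntimeError('load_rknn failed')

# NPU_CORE_0_1_2 asks the runtime to use cores 0, 1 and 2 together;
# the default (NPU_CORE_AUTO) schedules the model onto a single idle core.
if rknn.init_runtime(core_mask=RKNNLite.NPU_CORE_0_1_2) != 0:
    raise RuntimeError('init_runtime failed')

# Placeholder input shape -- adjust to the model's expected input.
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = rknn.inference(inputs=[dummy_input])

rknn.release()
```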