Hey, wasn't sure where to submit this as I don't actually need support, per se.

I have a feature request - it would be great if Apple Silicon users had the option to use the GPU to a much greater extent, particularly on models with more powerful GPUs. I'm using an M1 Max and VEAI seems to be using the Neural Engine, which is great since it barely takes up any resources that I would use for most other programs, but the added versatility would be great for a few reasons.

1.) The Neural Engine seems to be the same on all the current Apple Silicon models, whereas the GPU on the Max is a beast. I've seen some interesting tests - for example: - that seem to suggest that despite the Neural Engine's specificity to AI performance, the GPU on the Max still outperforms it in AI tasks under heavy loads. This seems to be supported by the fact that on my base M1, where VEAI had previously been emulated by Rosetta, the app became about 3x faster once the Apple Silicon native version came out and started using the NE. Considering the ~20% performance hit caused by emulation, it stands to reason that VEAI running natively on the base M1's GPU would still be somewhat less than 3x slower than on the NE. And since the GPU on the Max is about 3x faster than the base M1's, running VEAI on the Max's GPU should come out slightly faster than on the Neural Engine, at the expense of resources that other programs could use (rough arithmetic in the sketch below).

2.) Currently, one instance of VEAI seems to pretty much fully utilize the NE, whereas the old GPU version would let you run two instances of VEAI at once, each somewhat more than half as fast as a single instance - i.e. more total throughput when running on the GPU.
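To put numbers on point 1, here is the rough arithmetic as a tiny Swift sketch. Every figure in it is one of the ballpark observations above (the ~20% Rosetta penalty, the ~3x speedup of the native NE build, the Max's GPU being about 3x the base M1's), not a measurement of VEAI itself:

```swift
// Rough speed model, normalized so the old Rosetta-emulated GPU build
// on a base M1 has speed 1.0. All figures are ballpark assumptions.
let rosettaGPU   = 1.0
let neuralEngine = 3.0 * rosettaGPU  // native NE build ran ~3x faster than the emulated GPU build
let nativeGPU    = rosettaGPU / 0.8  // back out the ~20% emulation penalty
let maxGPU       = 3.0 * nativeGPU   // M1 Max GPU is roughly 3x the base M1's

print(neuralEngine / nativeGPU)      // ~2.4: NE vs. a hypothetical native GPU build on base M1
print(maxGPU / neuralEngine)         // ~1.25: the Max's GPU edges out the NE
```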
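As for how the option could be exposed: assuming VEAI's Apple Silicon build runs its networks through Core ML (an assumption on my part; I don't know its internals), the framework already has a per-model switch for exactly this, MLModelConfiguration.computeUnits. A minimal sketch, where `Upscaler` is a hypothetical stand-in for whatever generated Core ML model class the app loads:

```swift
import CoreML

// Sketch of the requested toggle. `Upscaler` is a hypothetical name,
// not a real VEAI class.
func loadUpscaler(preferGPU: Bool) throws -> Upscaler {
    let config = MLModelConfiguration()
    // .all lets Core ML schedule work onto the Neural Engine when available;
    // .cpuAndGPU keeps inference off the NE and on the GPU, which is the
    // option this request asks to surface on M1 Pro/Max machines.
    config.computeUnits = preferGPU ? .cpuAndGPU : .all
    return try Upscaler(configuration: config)
}
```

On macOS 13 there is also a .cpuAndNeuralEngine case, so a three-way CPU/GPU/NE preference in the UI would map directly onto the existing enum.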