thoughts on battlemage for AI?
llm performance seems lower than it should be, but it kicks ass for diffusion. At least for the B50, anyway.
>>108456404
As of now, not having CUDA is still limiting, but if it's cheap enough people will develop alternatives that work on them, and it will contribute to breaking Nvidia's moat. People seem to have managed with Apple and MLX, so this should be more of the same, I guess. Having an ecosystem that is more compatible across brands is good for everyone long term.
>>108456404
Can't speak to the AI use case, but my B580 causes my system to crash with BSODs every time I try to tab out of games.
>>108456521
>Can't speak to the AI use case, but my B580 causes my system to crash with BSODs every time I try to tab out of games.
Which OS, and the obligatory "is everything updated"?
>>108456543
Windows 10. Yes.
>>108456591
ReBAR on?
>>108456625
You got me hopeful there cuz I hadn't checked that before... but sadly, yeah, it's already enabled. My performance in the games is actually pretty damn good, right up until it BSODs with
>Stop code: DRIVER_IRQL_NOT_LESS_OR_EQUAL
>What failed: igdkmdnd64.sys
I was getting the crashes slightly less frequently with the drivers from a few weeks ago, so I might roll back to those.
>>108456521
AFAIK the B70 doesn't even have full Windows drivers yet, never mind anything for games. It's Linux-first for local AI inference.
>>108456404
If the model compatibility doesn't suck, the B70 would be amazing for its price point. But that's a big "if".
>>108456462
How are the speeds for video inference? I'm wondering if the extra RAM at this cost is worth the hit to speed versus a similarly priced card like a 5070.
>>108456720
Meant 5070 Ti.
>>108456720
You know, I actually haven't tried video yet. If you send a Comfy workflow (or just a particular model you have in mind), I'd be more than happy to see how it does.
>>108456710
Can Arc cards be set to higher fan speed and lower power draw?
>>108456717
>If the model compatibility doesn't suck, B70 would be amazing for its price point.
>But that's a big "if".
For inference, another thread was mentioning a vLLM fork by Intel, so that'd be good. Training might take more time, though, I guess.
>>108456730
I've only run it on poo hardware. If you just use the default ComfyUI workflow for Wan 2.2 and gen at around 1024x1024, defaults for the rest, I think that would at least gauge it vs my shitty Radeon card.
>no cuda on ai
lol, lmao even
>>108456730
>>108456764
Oh, you might have to lower it to 512x512 if you run out of RAM. AMD cards at least OOM easily. Not sure about Intel ones, but I have 16G to work with.
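The "drop the resolution until it fits" advice above can be sketched as a toy helper. Everything here is hypothetical: the function name and the linear bytes-per-pixel-per-frame cost model are made up for illustration, and no real backend estimates VRAM this crudely.

```python
# Toy sketch: halve a square gen resolution until a rough VRAM estimate
# fits. The cost model (bytes per pixel per frame) is invented for
# illustration only; real diffusion backends don't budget memory this way.
def fit_resolution(vram_bytes, frames, start=1024, floor=256,
                   bytes_per_pixel_frame=48):
    """Return the first resolution at or below `start` whose estimated
    footprint fits in `vram_bytes`, halving each step; None if even
    `floor` would not fit under this toy model."""
    size = start
    while size >= floor:
        estimate = size * size * frames * bytes_per_pixel_frame
        if estimate <= vram_bytes:
            return size
        size //= 2
    return None

# e.g. a 16 GiB card with the default 121 frames
print(fit_resolution(16 * 1024**3, 121))
```

Under these made-up constants, a 16 GiB card keeps 1024x1024 for 121 frames, while a ~1 GiB budget falls back to 256x256, which is the same shape of behavior the anons above describe when OOMing.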
>>108456783
Although the 3090 has CUDA, the B70 basically matches its performance, with more RAM and on fewer watts. Effectively, the used RTX 3090 is useless for AI now that the B70 is here. However, the B70 hasn't been used to play a single game yet, and idk if it will ever be capable of gaming.
>>108456801k
>>108456801
>meeting performance
With lower memory bandwidth and fewer TFLOPS? By both specs it's 66% of the speed. Higher memory and lower power are the only reasons to want this.
>>108456839
Looks like we got a full load (easily) @ 1024x1024 and the default length of 121 (is that seconds?). GPU is pegged at 100% usage and the first iteration took about 49 seconds. ETA is about 15 minutes from now.
>>108456839
>is that seconds?
It's frames.
>15 minutes
Honestly not terrible, considering that card has about half the speed of the B70 if I'm reading the press release right. Thanks for running this, anon!
>>108456832
Sorry, the 3090 should be slightly slower. Still pretty impressive for its age, and obviously the comparison in terms of game compatibility is a joke; the 3090 runs everything.
>>108456404
Works better than AMD, but like any alternative it's second class to NVIDIA. Check your workflow: if it has SYCL / OpenCL, it's good.
>>108456521
I have an A770 and I don't have problems, although I don't play on release day.
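For anyone wanting to check whether their Python stack actually sees the card, here's a minimal sketch, assuming PyTorch 2.5+ (which added the SYCL-based `torch.xpu` backend for Intel GPUs). The function name and fallback order are my own, purely illustrative.

```python
# Minimal device-detection sketch. Assumes PyTorch 2.5+, where the
# SYCL-based torch.xpu backend covers Intel Arc / Battlemage cards.
# The try/except keeps the sketch runnable even without torch installed.
def pick_device() -> str:
    try:
        import torch
    except ImportError:
        return "cpu"  # no torch at all, nothing to accelerate
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"  # Intel GPU via SYCL
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(pick_device())
```

If this prints "cpu" on an Arc box, the PyTorch build in that venv wasn't compiled with XPU support, and any diffusion workflow on top of it will crawl regardless of the card.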
>>108456887
How fast would the 3090 be? I've never done video, just images. zit mostly now.
>>108456944
All I have is this. If you want something more comparable to pic related, you can bypass that LoRA and run 81 frames @ 1280x720 and use the non-MoE version (it's only one pass instead of two). There are probably optimizations you can do with launch options, but some of them don't work well off Nvidia, from what I've tried.
>>108456737
>a vLLM fork by Intel
Oh hell yeah, it's 100% for inference. Anyone mentioning gaming as an actual use for this card is a moron. I doubt Intel will even bother releasing driver updates for this that do anything for games.
>>108456404
I'm not putting Israel in my PC willingly.
>>108457086
Whenever the other Battlemage (consumer) cards get gaming updates, those changes eventually get rolled into the Battlemage (pro) drivers, including gaming optimizations. You can even just run the Battlemage (consumer) drivers if you do a lot of gaming on your workstation for some fucking reason. Also, on Linux there's not even a distinction between the two drivers, and everyone wins.
B70 may be boring, but it's the most important news of this year, in tech.
>>108457103
That would be Cybercab, but at least in the PC hardware space I agree.
>>108456518
>but if it's cheap enough
The B70 is gonna be $1000.
>>108457086
This is correct, and the guy you replied to is a retard. In fact, you should probably use the non-pro drivers unless you're using it for CAD. I put a B50 in a workstation computer and it's actually fine for light gaming.
>>108456404
>thoughts on battlemage for AI?
It's a great card; I have a B580. The limiting factor is the lack of OpenVINO support for AI stuff: models with the OV extension will take better advantage of the hardware. You have llama.cpp GGUF, but that's not as fast because Vulkan isn't as optimized for these tasks; that's why on NVIDIA you use CUDA instead. And this is just AI inference.
>>108456404
B770 where?
>>108456717
>It's Linux-first
Extremely based if true. So the Xe driver already supports it?
>>108459337
>It's Linux-first
Not from my experience. I thought the experience on Linux would be far better, but the xe driver kept crashing and looping (god forbid you want hardware acceleration on Firefox in 2026), and trying to get things like ComfyUI (for testing) to work is fucking bad. I have that shit on Windows and it just works. The only thing is that I have no idea how to use it properly; I've made some shitty gens, but that's it. On Linux the generation process kept failing because it didn't see any VRAM. I am not overreacting: the only thing that worked well under Linux was LM Studio, but that's once again llama.cpp, which uses Vulkan. If you can't do that in 2026, it's fucking FUBAR.
>>108459397
tbf, the AMDGPU driver keeps crashing for me for weird reasons and lifetime issues as well.
>>108457259
For a 32GB VRAM card, that is cheap.
>>108459574
I am interested. But can I create my own uncensored Claude with this?
>>108459574
>uncensored Claude
Abliterated versions of other AI models already exist.
>I am interested. But can I create my own
I don't think any GPU Intel has created is powerful enough for that; Arc does not have something like SLI & CrossFire. Maybe I am mistaken and there is a Pro card that does, but I doubt it.
>>108456462
>diffusion
Have I been lied to? I've been told only LLM backends support Intel Arc. Does diffusion work too, e.g. Comfy? Like, with no problems?
>>108456404
Petaflops? Huh?
>>108460560
Not only does it support that, but they have a tool which sets it all up for you, offering an easy-mode way to access it.
https://game.intel.com/stories/introducing-ai-playground/
>>108460560
>does diffusion work too e.g. comfy? like with no problems?
Yep, on both Windows and Linux. Doesn't seem any less stable than ROCm on my 6950 XT.
>>108461205
Yeah, in fact as of last testing it was faster. AMD is getting better with PyTorch, but it still isn't stable.
https://chimolog.co/bto-gpu-stable-diffusion-specs/
https://chimolog.co/bto-gpu-wan22-specs/
This card might be around 4070 Ti level, between the 5070 and 5070 Ti.
>>108460653
What about fine-tuning for diffusion models? I heard some people did it with kohya scripts, but I don't know the details.
>>108456404
A couple weeks ago someone mentioned that his Intel GPU is much better for AI than his RDNA 2.
>>108456404
>muh ai
KYS
>>108461686
And with a lot more memory. Do you know if Comfy has a way to VAE-preview all of the images in a batch gen?