Should your business run AI inference on the same stack it uses for training, or build a specialized inference stack? Omdia expects specialized ASICs and ASSPs to become important competitors to GPUs. Which strategy is right for you?