That fascinates me. Intel deserves a lot of credit for PCI. They built in future-proofing for use cases that wouldn't emerge for years, when their bread and butter was PC processors and peripheral PC chips, and they could have done far less. The platform independence and general openness (PCI-SIG) are also notable for something that came from 1990 Intel.
As in, PCIem is going to populate the bus with virtually the same card (At least in terms of capabilities, vendor/product ID... and whatnot), so I don't see how you'd then add another layer of indirection that somehow transparently forwards the unfiltered transaction stream PCIem provides to an actual PCIe card on the bus. I feel like there are many colliding responsibilities in this.
I would instead suggest having some sort of behavioural model (As in, a predefined set of data to feed from/to) and having PCIem log all the accesses your real driver does. That way the driver would have enough infrastructure not to crash, and at the same time you'd get the transport-layer information.
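Something along these lines, as a rough sketch (all the names here are made up for illustration, not part of PCIem): a table of canned register contents plus a log of every access the driver makes.

    /* Sketch of a behavioural model: a predefined register file plus an
       access log, so the driver has something sane to read while you
       capture what it actually touches. Nothing here is PCIem API. */
    #include <stdint.h>
    #include <stdio.h>

    #define MODEL_BAR0_SIZE 4096

    struct access_record {
        int      is_write;
        uint64_t offset;
        uint32_t value;
    };

    static uint32_t bar0_model[MODEL_BAR0_SIZE / 4];   /* canned register contents */
    static struct access_record access_log[1024];
    static size_t log_count;

    static uint32_t model_read32(uint64_t off)
    {
        uint32_t v = bar0_model[off / 4];
        access_log[log_count++ % 1024] = (struct access_record){ 0, off, v };
        return v;
    }

    static void model_write32(uint64_t off, uint32_t v)
    {
        bar0_model[off / 4] = v;
        access_log[log_count++ % 1024] = (struct access_record){ 1, off, v };
        printf("WR32 0x%06llx = 0x%08x\n", (unsigned long long)off, v);
    }

    int main(void)
    {
        model_write32(0x10, 0xcafef00d);
        uint32_t v = model_read32(0x10);
        printf("reg 0x10 reads back 0x%08x, %zu accesses logged\n", v, log_count);
        return 0;
    }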
Passthru or time sharing? The latter is difficult because you need something to manage the timeslices and enforce process isolation. I'm no expert but I understand it to be somewhere between nontrivial and not realistic without GPU vendor cooperation.
Note that the GPU vendors all deliberately include this feature as part of their market segmentation.
Serious work, detail intense, but not so different in design from, e.g., Carmack's Trinity engine. Doable.
The other existing solution to this is FPGA cards: https://www.fpgadeveloper.com/list-of-fpga-dev-boards-for-pc... - note the wide spread in price. You then also have to deal with FPGA tooling. The benefit is much better timing.
PCIe prototyping is usually not something super straightforward if you don't want to pay hefty sums IME.
Seems unlikely you'd emulate a real PCIe card in software because PCIe is pretty high-speed.
Something like just a single BAR with a register that printfs whatever is written
Hopefully this is what you're searching for!
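For poking such a device from the host side, something as dumb as mmap'ing the BAR through sysfs works (the BDF, BAR size, and register offset below are placeholders for whatever PCIem enumerates on your machine):

    /* Write a value to BAR0 of the emulated device via sysfs, so the
       "printf register" fires. Needs root; BDF/size/offset are placeholders. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const char *bar0 = "/sys/bus/pci/devices/0000:00:05.0/resource0";
        int fd = open(bar0, O_RDWR | O_SYNC);
        if (fd < 0) { perror("open"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        regs[0] = 0xdeadbeef;   /* should show up in the device's printf */

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }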
PCIEM_EVENT_MMIO_READ is defined but not used anywhere in the codebase
You basically have the kernel eventfd notify you about any access triggered (Based on your configuration). So from userspace you have the eventfd, and then you mmap the shared lock-less ring buffer that actually contains the events PCIem publishes (So you don't end up busy polling).
You basically mmap a struct pciem_shared_ring where you'll have your usual head/tail pointers.
From then on, in your main loop, you'd have a select() or a poll() on the eventfd; when PCIem notifies userspace you'd check head != tail (Which means there are events to process) and you can basically do:
    struct pciem_event *event = &event_ring->events[head];
    atomic_thread_fence(memory_order_acquire);
    if (event->type == PCIEM_EVENT_MMIO_WRITE)
        handle_mmio_write(...);
And that's it, don't forget to update the head pointer!
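Putting that together, a rough sketch of the whole loop. The struct layout, field names (offset/value), event type values, and ring size below are guesses standing in for the real definitions in PCIem's headers, and I'm skipping how you obtain the eventfd and the mmap'd ring in the first place:

    /* Consumer loop sketch: poll the eventfd, drain the ring, publish the
       new head. The definitions below are stand-ins for PCIem's real ones. */
    #include <poll.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define RING_SLOTS 256   /* assumed power-of-two capacity, free-running indices */

    struct pciem_event { uint32_t type; uint64_t offset; uint64_t value; };  /* assumed layout */
    enum { PCIEM_EVENT_MMIO_READ = 1, PCIEM_EVENT_MMIO_WRITE = 2 };          /* assumed values */

    struct pciem_shared_ring {
        _Atomic uint32_t head;                 /* consumer index (we update it)     */
        _Atomic uint32_t tail;                 /* producer index (PCIem updates it) */
        struct pciem_event events[RING_SLOTS];
    };

    void drain_ring(int efd, struct pciem_shared_ring *ring)
    {
        struct pollfd pfd = { .fd = efd, .events = POLLIN };

        for (;;) {
            if (poll(&pfd, 1, -1) < 0)
                break;

            uint64_t n;
            if (read(efd, &n, sizeof(n)) < 0)   /* consume the eventfd counter */
                continue;

            uint32_t head = ring->head;
            uint32_t tail = atomic_load_explicit(&ring->tail, memory_order_acquire);

            while (head != tail) {
                struct pciem_event *ev = &ring->events[head % RING_SLOTS];

                if (ev->type == PCIEM_EVENT_MMIO_WRITE)
                    printf("MMIO write @0x%llx = 0x%llx\n",
                           (unsigned long long)ev->offset,
                           (unsigned long long)ev->value);
                else if (ev->type == PCIEM_EVENT_MMIO_READ)
                    printf("MMIO read  @0x%llx\n",
                           (unsigned long long)ev->offset);

                head++;
            }

            /* publish how far we've consumed */
            atomic_store_explicit(&ring->head, head, memory_order_release);
        }
    }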
I'll go and update the docs now. Hopefully this clears stuff up!
Usually, without actual silicon, you are pretty limited in what you can do in terms of anticipating the software that'll run.
What if you want to write a driver for it w/o having to buy auxiliary boards that act as your card? What happens if you already have a driver and want to do some security testing on it, but don't have the card or don't want to use a physical one for some specific reason (Maybe some UB in the driver pokes at a register that kills the card? Just making up disastrous scenarios to prove the point hah)?
What if you want to add explicit failures to the card so that you can try and make the driver as tamper-proof and as fault-tolerant as possible (Think, getting the PCI card out of the bus w/o switching the computer off)?
Testing your driver functionally and/or behaviourally on CI/CD on any server (Not requiring the actual card!)?
There's quite a bunch of stuff you can do with it; since it lives in userspace you can get as hacky-wacky as you want (Heck, I have a dumb-framebuffer-esque, OpenGL 1.X capable QEMU device I wanted to write a driver for, for fun, and I used PCIem to forward the accesses to it).
I wouldn't expect that to be mainstream until after optical networking becomes more common, and for consumer hardware that's very rare (apart from their modem).
In fact, "zero~th generation" of thunderbolt used optical link, too. Also both thunderbolt and DisplayPort reuse a lot of common elements from PCI-E