Real world experience with large (>100 universe) ArtNet systems

Dave Hallett


So we've put in a fairly large ArtNet system which is currently running a large amount of pixel-mapped LED for a show. We've had a couple of issues along the way - some of which I can talk about now, and some of which are still being discussed with manufacturers, so will come up for discussion once resolved. I'm after an exchange of information with people who've run similar networks. I'm fully aware that, strictly speaking according to the rules, some of this shouldn't work...

I did some searches about large ArtNet systems before putting this together and didn't come up with much - hence this post.


Total system:

12,224 RGB fixtures. 36,672 DMX channels. 167 universes over ArtNet.

Split into two pixel maps running on the same media server.


Map 1

8,642 fixtures, 25,926 DMX channels, 73 universes. All unicast - data only sent to the IP address of each output device (Schnick Schnack System PSU 4E - native ArtNet input).


Map 2

3,582 fixtures, 10,746 DMX channels, 94 universes. All broadcast - data sent everywhere on the network, outputting via several PRG Series400 Node+ units with ArtNet firmware.


The physical network carries only the ArtNet data - it doesn't have to co-exist with an existing building network. It has a gigabit switch backbone (every switch which has another switch downstream of it is gigabit), and the switches at the edges of the network are 100Mbit. All switches are unmanaged (although there is a managed switch with mirrored ports which I occasionally insert into the network for load and data monitoring). Network traffic out of the media server is a fairly constant 25-30Mbit/s, although it gets up to 80Mbit/s with the Artistic Licence bandwidth checker program running flat out.
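For what it's worth, the 25-30Mbit/s figure squares with a back-of-envelope calculation. Here's a quick sanity check in Python - the refresh rate, header sizes, and full-frame assumption are mine, not from the measurements above:

```python
# Rough sanity check of the ~25-30Mbit/s figure quoted above.
# Assumptions (not from the post): full 512-channel ArtDmx frames,
# ~44Hz DMX-rate refresh, and typical Ethernet/IP/UDP framing overhead.

ART_DMX_HEADER = 18          # bytes: ID, OpCode, ProtVer, Seq, Phys, Universe, Length
DMX_PAYLOAD = 512            # channels per universe (worst case)
ETH_IP_UDP_OVERHEAD = 42     # Ethernet (14) + IPv4 (20) + UDP (8) headers

def artnet_bandwidth_mbit(universes, refresh_hz=44, channels=DMX_PAYLOAD):
    """Estimated wire bandwidth in Mbit/s for `universes` ArtDmx streams."""
    packet_bytes = ETH_IP_UDP_OVERHEAD + ART_DMX_HEADER + channels
    return universes * refresh_hz * packet_bytes * 8 / 1e6

print(round(artnet_bandwidth_mbit(167), 1))   # all 167 universes -> 33.6
```

With all 167 universes at full 512-channel frames this comes out around 33Mbit/s; the measured 25-30Mbit/s is plausibly lower because many of the Map 2 universes are sparse and/or the refresh rate is below full DMX rate.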

Long runs (10m+) are done in Ethercon; short cables are standard machine-made CAT5e patch cables.


There are 94 devices on Map 2 which have varying numbers of LEDs on them, but all have to start at DMX channel 1. Hence some of those universes have only 36 active DMX channels - so there are only about 114 channels per universe on average. This makes the physical wiring of the LEDs much easier, and as the PSUs had to be made new in a very short time before Christmas, it was easier to do it that way. It also made the pixel-mapping process easier.
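Incidentally, ArtDmx frames don't have to carry all 512 channels - the length field can be any even value from 2 to 512, so a 36-channel universe can legally send a short packet. A minimal sketch of the packet layout (field values as I understand the published ArtNet spec; the helper name is mine):

```python
import struct

def art_dmx(universe, data, sequence=0, physical=0):
    """Build an ArtDmx packet; `universe` is the 15-bit port-address."""
    if len(data) % 2:                      # the spec requires an even data length
        data = data + b"\x00"
    return (b"Art-Net\x00"                 # 8-byte packet ID
            + struct.pack("<H", 0x5000)    # OpCode OpDmx, little-endian
            + struct.pack(">H", 14)        # protocol version 14
            + bytes([sequence, physical])  # Sequence, Physical
            + struct.pack("<H", universe)  # SubUni + Net bytes
            + struct.pack(">H", len(data)) # data length, high byte first
            + bytes(data))

pkt = art_dmx(0, bytes(36))    # a 36-channel universe, as on Map 2
print(len(pkt))                # 18-byte header + 36 channels = 54
```

Whether sending short frames actually reduces load depends on the media server honouring them, of course - many senders just pad to 512 regardless.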


Has anyone here put similar networks together? Got any hints? Any questions? I'd rather not detail some of it (what the show is, what the media server is) as yet, until I've had permission from people - so please don't ask or speculate about those things yet.


Cheers. Dave.


I have not done it with ArtNet, but I have with MANet2 and 16 NPUs. I did a show with 1,220 Happy Tubes (122 universes) forming a canopy, plus a couple of universes of movers and dimmers. I had a GigE Cat6-to-fibre converter at the desk, which did a 120m fibre run to the roof into a GigE managed switch (it was run pretty much unmanaged). I paid a lot of attention to how I ran all the tubes - I managed to get them radiating out from the centre of the roof, which kept my cable runs simple. There was no media server involved, so correct grouping of fixtures was critical to make dealing with the canopy easy.


As far as infrastructure goes, I would suggest limiting the number of switches in the equation. Also use a good-quality switch in the centre, as it is critical that it can deal with the traffic. If you use a bunch of small (say 8-port) gigabit switches cascaded together, you can run into issues with the central switch running out of memory in its MAC lookup table. On the other hand, if you have a decent-quality 32-port switch in the middle and only cascade once, there is a good chance that the central switch will be perfectly happy storing all those MAC addresses and everything will run nice and smoothly.


Never having done anything like this, but as someone who in the past was paid because he knew a lot about networking, my wonderment goes to the broadcast load.


First, a bit of background as to why - and apologies if I'm trying to teach granma (ha - poor pun!) to suck eggs.


With unicast packets, the network interface hardware ignores any packet not addressed to that particular interface. More strictly, it only bothers the CPU with a packet if the destination MAC address of the incoming packet matches the MAC address of the interface.


With broadcast packets, every packet must be passed up from the network interface to the CPU. The CPU then has to analyse the packet and determine whether it is of interest and, if not, discard it. On most networks, most broadcast packets aren't of interest to most devices.


So the worry is that any Ethernet device on the network will have its processor interrupted by broadcast packets. With a high broadcast rate and devices with low-powered CPUs, they can be overwhelmed by the processing load.
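At the socket level the two cases differ only by the destination address and one socket option - a minimal Python sketch (the node and broadcast addresses are illustrative, not from the system described above):

```python
import socket

ARTNET_PORT = 6454    # UDP port used by ArtNet

def send_artdmx(payload, addr, broadcast=False):
    """Send one frame; SO_BROADCAST must be enabled for broadcast addresses."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if broadcast:
        # Every interface on the segment passes a broadcast up to its CPU.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        return s.sendto(payload, (addr, ARTNET_PORT))
    finally:
        s.close()

frame = b"Art-Net\x00" + bytes(560)   # stand-in for a real ArtDmx packet

# Unicast, as on Map 1: only the addressed node's CPU is interrupted.
#   send_artdmx(frame, "2.0.0.10")                       # illustrative node IP
# Broadcast, as on Map 2: every device must inspect the packet.
#   send_artdmx(frame, "2.255.255.255", broadcast=True)  # ArtNet primary b'cast
```

The wire load is identical either way; the difference is purely in how many receiving CPUs have to look at each packet, which is exactly David's point.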


Hi Mac. That's a good point about the MAC lookup. The Luminex gigabit switches have a 4k lookup table, so in this instance (with only 50 or 60 physical devices on the network) it's fine, but it would be worth considering for someone coming to this sort of thing fresh with a new project. That's obviously the point of this thread, so more hints like that from the assembled company, please!


As far as I know, the 'biggest' switch which is natively Ethercon is the Pathport Via 10-port model, and we only had 8-port Luminex available. In a permanent installation I would agree that a larger central switch would be more important than the Ethercon connections. In our temporary studio setting, with half-wits trampling all over things, the rugged connections are vital.


Hi Tim. I'll give you a call tomorrow to compare notes...


David, the broadcast element of this system is definitely where it breaks the most rules. I am fortunate in that both output devices are 'proper' and can cope with the storm of information arriving at them. The snag is that the PRG nodes have IP addresses that are hard-set in the 10.x.x.x range (although they listen for broadcast data in the 2.x.x.x range). The rest of the system is currently in the 2.x.x.x range, and I need to check whether every other part of the system will respond properly in the 10.x.x.x range.

If it will (as I believe the ArtNet specification encourages), then I may move the whole system over, at which point I should be able to unicast the whole lot and thus reduce the load on the output devices. I also need to do that in such a way that there's time to test it thoroughly, so there's no chance of it all going horribly wrong when we're live on the telly - which would be disappointing. Obviously there's a whole pile of real lights to look after too, so there's not been time to do that yet. If I go that route I will of course update the thread.
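One way to test the "will everything respond in the 10.x range" question before committing would be to broadcast an ArtPoll on the new subnet and see which devices answer with an ArtPollReply. A rough Python sketch - the broadcast address, timeout, and function name are my assumptions, and the packet layout is from the published ArtNet spec as I understand it:

```python
import socket
import struct

ARTNET_PORT = 6454
ART_POLL = (b"Art-Net\x00"               # 8-byte packet ID
            + struct.pack("<H", 0x2000)  # OpCode OpPoll, little-endian
            + struct.pack(">H", 14)      # protocol version
            + bytes([0, 0]))             # TalkToMe flags, Priority

def discover_nodes(broadcast_addr="10.255.255.255", timeout=3.0):
    """Broadcast an ArtPoll; return the set of IPs that send an ArtPollReply."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.bind(("", ARTNET_PORT))            # replies arrive on the ArtNet port
    s.settimeout(timeout)
    s.sendto(ART_POLL, (broadcast_addr, ARTNET_PORT))
    nodes = set()
    try:
        while True:
            data, (ip, _port) = s.recvfrom(1024)
            # OpPollReply is opcode 0x2100, little-endian after the ID
            if data[:10] == b"Art-Net\x00" + struct.pack("<H", 0x2100):
                nodes.add(ip)
    except socket.timeout:
        pass
    s.close()
    return nodes
```

Comparing the set of responders on 10.255.255.255 against the known device list would show quickly whether anything goes quiet after the move, without waiting for it to fail on air.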


Cheers. Dave

