Everything posted by Anothertom

  1. To answer the questions you posted: 1. Yes, use a camera cue. 2. A capture card to take a video feed from the PowerPoint PC that is supported in macOS and QLab. QLab can access video capture devices through camera cues, and you can then apply the same geometry and effects to them as a normal video cue. If you upgrade to QLab 5 you can utilise NDI screen capture to send the video feed over a network, but for various reasons a hardware capture card is a better solution. Control-wise, you would be sending TCP commands direct from QLab, not OSC; when you set the destination you can tell it what protocol it's sending (there's a small sketch below of what a direct projector command can look like). If you struggle to get it to work directly, you could send OSC from QLab to Companion, which will probably have a nicer integration with the projector. This is definitely a cheaper option, but a more robust solution would be to put a presentation switcher between the PC and the projector. That would allow you to switch between multiple sources cleanly, and utilise things like still stores, PiP effects, and being able to freeze or black out the signal. It's also a more reliable device to be the single point of failure than either a PC or a Mac. Assuming your projector is 1080p or lower, you don't even need anything particularly fancy or recent: something like a PDS-901/902, Analog Way Pulse2 3G, or a similar Roland switcher.
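    As a hedged sketch of what a direct TCP command can look like: many projectors speak PJLink on TCP port 4352, and the Python snippet below (placeholder IP, and assuming PJLink authentication is disabled) just powers the projector on, so you can verify the command set outside QLab before building your network cues. Check your projector's own protocol document for the exact strings it expects.

    import socket

    PROJECTOR_IP = "192.168.1.50"   # placeholder address - use your projector's IP
    PJLINK_PORT = 4352              # standard PJLink TCP port

    with socket.create_connection((PROJECTOR_IP, PJLINK_PORT), timeout=3) as s:
        greeting = s.recv(128).decode("ascii", errors="replace")
        if not greeting.startswith("PJLINK 0"):
            # "PJLINK 1 <seed>" means auth is enabled; handle the MD5 digest or disable auth
            raise RuntimeError("Projector expects PJLink authentication")
        s.sendall(b"%1POWR 1\r")    # power on; "%1AVMT 31\r" would mute picture and sound
        print(s.recv(128).decode("ascii", errors="replace"))  # expect "%1POWR=OK"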
  2. Equally, a cheap (like, really cheap, 5-year-old, starting-to-die) laptop/NUC/PC will do this in software (QLC+ or MagicQ spring to mind as free options that I know will do this).
  3. As an alternative to playing around with MSC (which I admit I only ever use because I like being able to say I'm using it, not because it's ever implemented well), you could use an OSC router to take in commands from your lighting console and send the relevant commands to each endpoint you have (a rough sketch of the idea is below).
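    A minimal sketch of that routing idea, using the python-osc package; the listening port, the endpoint addresses and the address patterns are all placeholders for however your console and rig are actually set up.

    from pythonosc import dispatcher, osc_server, udp_client

    # Placeholder endpoints - substitute the real machines and ports on your network.
    QLAB = udp_client.SimpleUDPClient("192.168.1.10", 53000)
    MEDIA = udp_client.SimpleUDPClient("192.168.1.11", 7000)

    def route(address, *args):
        # Fan each incoming console message out to whichever endpoint cares about it.
        if address.startswith("/qlab"):
            QLAB.send_message("/go", [])           # fire the next cue in QLab
        elif address.startswith("/media"):
            MEDIA.send_message(address, list(args))

    disp = dispatcher.Dispatcher()
    disp.set_default_handler(route)

    server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), disp)
    print("OSC router listening on port 8000")
    server.serve_forever()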
  4. While you may have some equipment to hand, you don't have the right equipment to hand. You're thinking of your Atem as a piece of content, but it's just a switcher; your content is what you have upstream of the Atem. In this scenario you need to replace your Atem with something more suitable for the job: a piece of software or hardware able to handle the full resolution of your LED wall and map inputs to sections of that full canvas, either a software media server/playout package or a hardware presentation switcher. In this situation I'd probably be speccing a PDS-4K or an S3-4K, if the requirements really are that simple. Your Atem Mini isn't going to cut it, even just in terms of pure resolution (a quick sanity check of the numbers is sketched below). Assuming an LED product at 2.6mm, which is pretty much standard for a corporate setup, your full resolution will be 4128x1376. Your Atem's output is 1920x1080, which means you're scaling up by 2.15x to fill the width, which also means the usable height of your output is just 502 pixels being stretched on a 4m high wall.
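    A rough sanity-check sketch of that sort of calculation; the wall dimensions here are placeholders (chosen to land near the 4128-pixel width above), so plug in your own wall size and pitch.

    # Pixel canvas of an LED wall vs a 1080p source.
    wall_width_m, wall_height_m = 10.73, 3.58   # placeholder physical wall size
    pitch_mm = 2.6                              # pixel pitch

    px_w = round(wall_width_m * 1000 / pitch_mm)   # ~4127 pixels wide
    px_h = round(wall_height_m * 1000 / pitch_mm)  # ~1377 pixels high
    scale = px_w / 1920                            # how far a 1080p feed gets stretched
    print(px_w, px_h, f"width upscale ~{scale:.2f}x")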
  5. Assuming you come back to this post... The first thing I would suggest is to just contract a production company to provide the whole solution for you; the fact you're asking this question means you're probably not in a position to deliver it. Not going to plug anyone in particular, but there are plenty up and down the country that can very easily deliver this LED solution. Focussing on your content workflow question, an Atem switcher is not the best solution: they are purely designed around live camera workflows and fully 16:9 systems. You would be better off with what's known as a presentation switcher (e.g. a Barco PDS-4K or similar). These support a much wider range of resolutions (including custom) on both input and output, have a full compositing system that is separated from I/O, and support multiple input formats and connections. An alternative (on a tighter budget) would be to run a software solution with video capture; vMix, Resolume, QLab, and many others could be used to do this very easily. Some people will suggest putting scaling between the content and processing to force the content to fit the resolution of the LED, but that only results in increased latency and warped content; there's no need to do this in the majority of situations.
  6. What trouble are you having? You just download the installer and put it on a drive. Can you download it to a different file path and then copy it across? Have you tried a different drive? What storage format is the drive you're using?
  7. So, assuming you feed a 16:9 source to the projector and a 0.39:1 lens, that gives a rough estimate of 2.08m from the centre of the lens mirror to the surface, which is or isn't terrible depending on your space, and gives you 4 of the 16 to space the two screens if you went with that, or two screens that can be connected to make a single larger screen, for more flexibility. It's all about ratios: the further the projector is from the screen, the bigger the image can be (a small worked example is below). At the very least, hire kit for a proof of concept before purchasing. An AV hire company may also be a retail partner, who can assist with competitive pricing as well as any servicing you might need down the line. Having just re-read everything, I now realise you're not talking about theatrical production, which possibly gives more options for using longer-throw lenses, but that would be dictated by each individual setup, rather than being able to say X lens will always be right. There are zoom lenses, but generally only once you get towards 1:1 or higher (i.e. more throw than projection width). You can probably also get away with much smaller projectors, as you aren't directly competing with lights pointing at the screens and it won't be such an issue if the content gets blown out by a light every so often. Possibly some 5Ks with small zoom lenses.
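    A quick worked example of the throw maths (throw = throw ratio x image width); the screen width here is a placeholder chosen to line up with the ~2.08m figure above, so swap in your real screen size and lens ratio.

    throw_ratio = 0.39      # ultra-short-throw lens, 0.39:1
    image_width_m = 5.33    # placeholder screen width

    throw_m = throw_ratio * image_width_m
    print(f"Throw distance ~{throw_m:.2f} m")   # ~2.08 m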
  8. Some information is lacking for a useful answer to your questions: how big are the two screens you would like to use, what's the total width, and what gap would you have in the middle (it might help if you draw this out)? Are the screens flown or ground-supported? And what throw distance would you have for the projection? An ultra-short-throw lens is typically around 0.39:1 (so 0.39m of throw for 1m of image width) and will be a fixed ratio. I'd suggest contacting any friendly rental AV companies you use for specific models; you definitely don't want to be purchasing for a one-off. Your suggestion for control and playback is fine; I'd suggest mapping the master opacity to a DMX channel so you can enforce a blackout if something goes wrong. There are other ways you can avert disaster, but that's a good start.
  9. I'd probably have suggested finding a 2011 MacBook Pro, upgrading the HDD to an SSD, and using that. Reported compatibility is up to OS X 10.11. But going with a more modern product is probably better in the long run, and you now have a spare Focusrite.
  10. Or something that's not EOL, like MagicQ, which will do 64 universes for free and gets updates. Neither dot2 nor MagicQ will allow USB devices to be used for output directly. (You could technically use QLC+ as an Art-Net/sACN-to-USB layer, but that's a silly amount of risk to avoid buying a one-output Art-Net node.)
  11. Visualisers

    Personally I'm a big fan of Capture, but as you've ruled that out, let's explore some other options... L8, as you mention, has some very cheap options, but with limitations: for the €88 option you have fixture limitations, no paperwork, and limited input universes. A nice helpful table of what you get with each license: https://l8.ltd/m/ Depence2 is a very pretty visualiser and can be used to create some really good-looking renders; it's also not much cheaper than a Capture Symphony license. Lightconverse: also not cheap, but has lower tiers as well. Vectorworks Vision: similar price, but can be a one-off purchase, and is very "industry standard". There are loads of really terrible abandoned projects where people have thought "let's build a visualiser and make it free and amazing", and then realised that making something usable is quite difficult. Also: https://xkcd.com/927/ What's worth considering is that many lighting consoles come with some level of visualisation: Avolites (Capture), MA (MA 3D, or built in on MA3) and ChamSys (MagicVis) are freely included with the consoles. You can also use merging to use these visualisers with other controllers by sending data through the native console (or PC-based solution). Using MA 3D or MagicVis with a different data source is very simple to set up and completely free; Avo would require licensed hardware of some kind. It really depends what you're going to use it for, and whether you're going to need to produce files other people can work with.
  12. Don't go down the HyperDeck route; this isn't what they're designed for and will only impose restrictions on what you can do. The solution would be a Mac Studio running QLab. It supports up to 5 outputs, which will do what you're looking for, but I would suggest (as you're staying at 1080p for now) also going down the FX4 route. The Mac will be much more reliable when using fewer physical outputs, and you don't need to be as specific when selecting adaptors and outputs. HDMI for a GUI head, then Thunderbolt to DisplayPort into an FX4 SDI, and SDI distribution to the projectors.
  13. Yeah, there isn't really a way to do rentals, other than hiring a person with a license. As a person with a full Capture license, and proficient with it: if you would like some paperwork or a pre-vis file creating, feel free to PM me with more details or to discuss what you need.
  14. As sameness said, set everything to the same frame rate and the same scan mode. There's generally no need to be running interlaced anywhere in a live event scenario, so my preference would be either a full 50p or 60p* depending on region. You should then also have a sync generator in your chain and feed that to all cameras, your switcher and any other routing that can accept a genlock signal, and this should also be set to the frame rate your kit is operating at (technically the other way round, but set it all the same). The videos you shared: that's definitely tearing, but it looks rather severe. I'd suggest you verify the output of the camera on either a field monitor or through a format converter and into a normal screen. You should also check that anything you're viewing the output on is receiving and reporting the correct signal. You're right that the Atem mixers process the multiviews separately; they can't really be relied upon for critical viewing. *Exceptions apply
  15. I find this an interesting development for a lighting desk, as media servers (or at least disguise) have had 'selectable output cards' for a while, which is great as you can change your outputs for the specifics of the job. I think it's definitely driven by the intention of heavily utilising network protocols for both output and control, which the inclusion of two 10Gbps SFPs and four 1Gbps ports onboard is geared towards (a mental amount of networking capability for a lighting console). I'm surprised there are no permanent physical outputs at all, and also that you need to use a bay each for MIDI and LTC, which should probably be built into the hardware, although OSC is much more of a thing these days, and other network-based control is available, so MIDI at least isn't such a deal breaker. I'd guess a 'standard' IO config would probably be MIDI, LTC and two DMX modules.
  16. That depends what the client wants. If they specifically want the presenter view (as it gives the current slide, next animation and notes) then you simply need a laptop that can handle two external outputs (or a desktop with three). This does mean that at least one will need to be using an active converter from DisplayPort to whatever transport you're using (presumably HDMI into an SDI converter), and this can be via USB-C or Thunderbolt etc... but will need to be active. If you're on Windows, a really nice way to do this is with a cheap MST hub; I was pleasantly surprised when the Anker USB-C to dual HDMI turned out to be a proper MST hub, and it's probably the cheapest one I've ever seen that actually works. Of course, if you have physical HDMI and a mini-DP then you can just use those anyway. You can always just run a second laptop in sync with the outputs reversed, but it seems silly to be doing that in 2022.
  17. Those non-brand Chinese HDMI extenders are notoriously terrible, and only having power at the sender is just asking for trouble. If you wanted to extend over Cat6/7 then you should be looking for a more reputable HDBaseT box (e.g. Lightware, from personal experience). Those are significantly more expensive, but will reliably work. Lower-quality sender/receivers might work at lower resolutions, but you might get intermittent dropouts. Performance is also more related to the CSA of the cable than anything else afaik, as it's not IP traffic and not bound by that standard. The (preferred) alternative would be to get some HDMI-SDI converters (Blackmagic Micro Converters for the cheaper end of this) and run an SDI line to the projector; if you want a local preview then a cheap HDMI splitter would be the solution.
  18. So long as you have physical MA hardware which unlocks parameters (e.g. a command wing, onPC node or NPU) you can output Art-Net. Resolume will act as the destination node, so you don't need an Art-Net device on the network. If you're doing this on a single machine you need to make sure you're using the 127.0.0.1 localhost IP within MA Network Control; if you're doing this across multiple machines then make sure they're on suitable IP addresses.
    You can create an Art-Net output in MA through Setup > Network Protocols > Art-Net, then "Add New". Set it up as an output; unicast is (generally) preferable if you're using two machines or working within a larger show ecosystem, but if you're trying to do this on a single machine then you might need to set it to broadcast or set the destination IP to 127.0.0.1.
    Depending on which version of Resolume you're using, there are either default composition and layer DMX mappings available (up to Arena 5) or you need to manually create your own DMX mapping (Arena 6 & 7). You can use the channel layout of the default maps so you can use existing fixture profiles, but you need to manually assign each DMX channel; otherwise you then need to create suitable fixture profiles to match the channel layout you use. Create the DMX mapping through Shortcuts > Edit DMX, right-click on something to control, hit the option it brings up, then set the universe and channel being used to control it in the bottom right.
    In Resolume > Preferences > DMX, add a Lumiverse input and select the correct network adapter with the IP address MA is sending to (if on a single machine then choose localhost). Make sure the Lumiverse number lines up with the Art-Net universe number used within MA.
    Once you've set up the DMX map in Resolume, patched the fixture in MA (or used a bunch of dummy dimmer channels as individual controls) and set up the Art-Net output and input within MA and Resolume, you should be able to see the incoming data within Resolume by going to Preferences > DMX and hitting the little triangle in a circle (top right), which displays incoming data. If you want to prove the Resolume end without MA in the loop, see the sketch below. All this information is available within the online manuals for MA and Resolume.
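    A hedged sketch for testing the Resolume side independently of MA: it hand-rolls a single ArtDMX packet and fires it at the localhost setup described above. The universe number, channel and destination IP are assumptions; match them to your Lumiverse and DMX map settings.

    import socket

    def artdmx_packet(universe: int, channels: bytes) -> bytes:
        # ArtDMX header: ID, OpCode 0x5000 (little-endian), protocol version 14,
        # sequence, physical, 15-bit port address (SubUni + Net), data length.
        packet = bytearray(b"Art-Net\x00")
        packet += (0x5000).to_bytes(2, "little")                    # OpDmx
        packet += (14).to_bytes(2, "big")                           # protocol version
        packet += bytes([0, 0])                                     # sequence, physical
        packet += bytes([universe & 0xFF, (universe >> 8) & 0x7F])  # SubUni, Net
        packet += len(channels).to_bytes(2, "big")                  # DMX data length
        return bytes(packet) + channels

    dmx = bytearray(512)
    dmx[0] = 255   # channel 1 at full - e.g. whatever you mapped to layer opacity

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(artdmx_packet(0, bytes(dmx)), ("127.0.0.1", 6454))  # Art-Net UDP port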
  19. Either an external merge, or doing it inside the MA, since you're already MA-Net'ing it up. If you park the DMX channel at zero within MA and set up an external DMX input (this can be done via hardware or Art-Net/sACN etc...), then set that universe to merge with the universe they're patched on. Just make sure the channels line up between what you're sending to the MA and what addresses the rings are within their fixture profile (there's a tiny sketch of the merge idea below). If you had lots of fixtures you wanted to do this on (or have a very parameter-heavy show) then going external would probably be better, as it reduces workload on the console and system complexity, but if it's not too many then doing it internally should be fine.
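    Just to illustrate what the merge is doing conceptually (assuming an HTP-style merge, which is the usual behaviour for this kind of input merge): parking the channel at zero on the console means the external value always wins. A minimal sketch:

    def htp_merge(universe_a, universe_b):
        # Highest takes precedence, channel by channel.
        return [max(a, b) for a, b in zip(universe_a, universe_b)]

    ma_output = [0] * 512        # ring channel parked at zero on the console
    external = [0] * 512
    external[9] = 180            # external source driving channel 10

    print(htp_merge(ma_output, external)[9])   # 180 - the external value passes through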
  20. As nic says, I don't think Capture simulates lasers directly, but you can use laser control programs to feed data to Capture, though that isn't something I've ever looked into. The manual I found (https://www.laserworld.com/en/support/downloads/download/80-laserworld-club-series-archive/1036-laserworld-manual-club-series-cs1000rgb-mkii-cs-2000rgb-mkii-2015.html) provides this page of detail (https://i.imgur.com/zeVW44j_d.jpg), which only needs a little translation clean-up. If you have issues in a live scenario, to test that it responds properly, start by using 11 dimmer channels rather than any fixture profile, and verify/test that each channel does what it suggests it should.
  21. If you enter the name of your laser into Google, the first link will take you to a download of the manual. There you will find much useful information about your laser, including a breakdown of the DMX chart. As for the visualiser element of your question, presumably you're using the most up-to-date version and library your license allows? I've never looked at the simulation of lasers in Capture, but I don't see why it shouldn't respond as in real life. If you control it directly from Capture, does it respond properly? I'll fire up Capture and see how it behaves.
  22. This seems like a strange issue, and I'm not saying I can absolutely fix it... But the first thing I would try is taking the up/down/cross converter out of the system and trying to replicate the symptoms on a smaller scale: feed it 1080p50 over HDMI and spit it out as SDI (at the same frame rate and resolution) and see if it still drops out.
  23. It may be out of budget, and you didn't mention the scale of the production, but have you considered using video floor for this? You mention wedding flooring but not video floor specifically. It would reduce effort, would obviously be a purpose-built structure, and be rated for the amount of weight that would be on it. You probably wouldn't need a particularly narrow pixel pitch if you're going for full-colour panels (and there's the required distance between subject and audience), and you may be able to use a second diffusion layer on top of the shader (as it may not have the required off-axis visibility). You'd probably want to do some small-scale proof-of-concept tests. Obviously costs will vary with quality, and would be a lot more than building some boxes out of ali and timber and chucking some LED tape in them, and if you've not already got video aspects to the show then it's a whole extra department of planning and expertise, but it's just a thought.
  24. Not sure about the unit you saw; I'm not aware of any units like this with a physical fader, but pretty much any DMX tester will give this functionality. For example the Swisson XMT-120A, or a wireless alternative would be a DMXcat, which is controlled via Bluetooth from a phone/tablet (and can be used to check intelligent fixtures as well as generics). If you want to control a desk remotely then most manufacturers allow a networked device some amount of control, e.g. Avo have their Titan Remote app, MA allow network control...