Anothertom

Regular Members
  • Posts: 25

Previous Fields

  • Member Status
    Working in the industry
  • Current Employment or place of study
    Senior Digital Technician for White Light Ltd
  • Professional organisation membership
    -
  • Full Name
    Tom Andrews

Contact Methods

  • Website URL
    http://www.tomandrewslx.com

Profile Information

  • Location
    London



  1. To answer the questions you posted: 1. Yes, use a camera cue. 2. A capture card that takes a video feed from the PowerPoint PC and is supported in macOS and QLab. QLab can access video capture devices through camera cues, and you can then apply the same geometry and effects to them as to a normal video cue. If you upgrade to QLab 5 you can use NDI screen capture to send the video feed over a network, but for various reasons a hardware capture card is a better solution. Control-wise you would be sending TCP commands directly from QLab, not OSC; when you set the destination you can tell it which protocol it's sending. If you struggle to get it working directly, you could send OSC from QLab to Companion, which will probably have a nicer integration with the projector. This is definitely the cheaper option, but a more robust solution would be to put a presentation switcher between the PC and the projector. That would allow you to switch between multiple sources cleanly, and to use things like still stores, PiP effects, and the ability to freeze or black out the signal. It's also a more reliable device to be the single point of failure than either a PC or a Mac. Assuming your projector is 1080p or lower, you don't even need anything particularly fancy or recent: something like a PDS 901/902, an Analog Way Pulse2 3G, or a similar Roland switcher.
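A hedged sketch of the TCP side of this: PJLink is a common projector control protocol, and the helpers below build and parse the kind of raw command string a QLab network cue (or any TCP sender) would transmit. Whether your projector speaks PJLink, and unauthenticated PJLink at that, is an assumption; check its manual for the actual protocol.

```python
# Sketch: building and parsing PJLink class-1 command strings, the kind
# of raw TCP payload you would configure a QLab network cue to send.
# Assumes an unauthenticated PJLink projector on TCP port 4352.

def build_command(body: str, param: str) -> str:
    """Build a PJLink class-1 command, e.g. POWR 1 to power the projector on."""
    return f"%1{body} {param}\r"

def parse_response(raw: str) -> str:
    """Extract the result field from a PJLink response like '%1POWR=OK'."""
    return raw.strip().split("=", 1)[1]

power_on = build_command("POWR", "1")   # power on
shutter  = build_command("AVMT", "31")  # video + audio mute (shutter closed)
```

The same strings could equally come from Companion; QLab just needs the destination host, port 4352, and these commands as the cue payload.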
  2. Equally, a cheap (like, really cheap, five-year-old, starting-to-die) laptop/NUC/PC will do this in software. (QLC+ or magicQ spring to mind as free options that I know will do this.)
  3. As an alternative to playing around with MSC (which, I admit, I only ever use because I like being able to say I'm using it, not because it's ever implemented well), you could use an OSC router to take in commands from your lighting console and send the relevant commands to each endpoint you have.
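A minimal sketch of the routing-table idea behind an OSC router. The addresses, hostnames, and ports are made up for illustration; a real router would listen on a UDP socket (e.g. with the python-osc library) and forward messages rather than have `route()` called directly.

```python
# Sketch: the core of an OSC router is just a table mapping incoming
# addresses (e.g. from a lighting console) to the endpoints and
# translated addresses that should receive them.
# All hosts, ports, and address patterns below are hypothetical.

ROUTES = {
    "/console/go": [
        ("qlab.local", 53000, "/cue/next/start"),
    ],
    "/console/blackout": [
        ("mediaserver.local", 7000, "/layer/1/opacity"),
        ("qlab.local", 53000, "/panic"),
    ],
}

def route(address: str):
    """Return the (host, port, translated_address) targets for one message."""
    return ROUTES.get(address, [])
```

One console keystroke can then fan out to several endpoints, each in the dialect that endpoint expects, which is the whole appeal over wrestling with MSC.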
  4. While you may have some equipment to hand, you don't have the right equipment to hand. You're thinking of your ATEM as a piece of content, but it's just a switcher; your content is what you have upstream of the ATEM. In this scenario you need to replace your ATEM with something more suitable for the job: a piece of software or hardware able to handle the full resolution of your LED wall and map inputs to sections of that full canvas, either a software media server/playout package or a hardware presentation switcher. In this situation I'd probably be speccing a PDS-4K or an S3-4K, if the requirements really are that simple. Your ATEM Mini isn't going to cut it, even just on the lines of pure resolution. Assuming an LED product at 2.6mm pitch, which is pretty much standard for a corporate setup, your full resolution will be 4128x1376. Your ATEM's output is 1920x1080, which means you're scaling up by 2.15x to fill the width, which in turn means only about 640 lines of your output are visible, stretched over a 4m-high wall.
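The scaling arithmetic in that answer can be checked in a few lines, using the post's assumed 2.6mm corporate wall and a 1080p switcher output:

```python
# Working through the LED wall resolution numbers: the width ratio between
# wall and output gives the upscale factor, and the wall height divided by
# that factor gives how many output rows are actually visible.
# Wall size (4128x1376 at 2.6mm pitch) is the post's assumed example.

wall_w_px, wall_h_px = 4128, 1376   # native LED wall resolution
out_w_px, out_h_px   = 1920, 1080   # ATEM Mini output

scale = wall_w_px / out_w_px          # upscale needed to fill the width
visible_out_rows = wall_h_px / scale  # output rows that land on the wall

print(round(scale, 2))          # 2.15
print(round(visible_out_rows))  # 640
```

So roughly 440 of the 1080 output lines never reach the wall at all, and what remains is stretched more than two times: the core argument for a canvas-aware processor instead of a 16:9 switcher.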
  5. Assuming you come back to this post... The first thing I would suggest is to contract a production company to provide the whole solution for you; the fact you're asking this question means you're probably not in a position to deliver it. I'm not going to plug anyone in particular, but there are plenty up and down the country that can very easily deliver this LED solution. Focussing on your content workflow question: an ATEM switcher is not the best solution. They are designed purely around live camera workflows and fully 16:9 systems. You would be better off with what's known as a presentation switcher (e.g. a Barco PDS-4K or similar). These support a much wider range of resolutions (including custom ones) on both input and output, have a full compositing system that is further separated from I/O, and support multiple input formats and connections. An alternative (on a tighter budget) would be to run a software solution with video capture; vMix, Resolume, QLab, and many others could do this very easily. Some people will suggest putting a scaler between the content and the processing to force the content to fit the resolution of the LED, but that only results in increased latency and warped content; there's no need for it in the majority of situations.
  6. What trouble are you having? You just download the installer and put it on a drive. Can you download it to a different file path and then copy it across? Have you tried a different drive? What storage format is the drive you're using?
  7. So assuming you feed a 16:9 source to the projector with a 0.39:1 lens, that gives a rough estimate of 2.08m from the centre of the lens mirror to the surface, which is or isn't terrible depending on your space. And it gives you 4 of the 16 to space the two screens if you went with that, or two screens that can be connected to make a single larger screen, for more flexibility. It's all about ratios: the further the projector is from the screen, the bigger the screen can be. At the very least, hire for a proof of concept before purchase. An AV hire company may also be a retail partner, who can assist with competitive pricing as well as any servicing you might need down the line. Having just re-read everything, I now realise you're not talking about theatrical production, which possibly gives more options for using longer-throw lenses. But that would be dictated by each individual setup, rather than being able to say X lens will always be right. There are zoom lenses, but generally only once you get towards 1:1 or higher (i.e. more throw than projection width). You can probably also get away with much smaller projectors, as you aren't directly competing with lights pointing at the screens and it won't be such an issue if the content gets blown out by a light every so often. Possibly some 5Ks with small zoom lenses.
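The throw estimate is simple ratio maths: throw distance is throw ratio times image width. The ~5.33m image width below is an assumption back-solved to match the post's 2.08m figure, not a quoted spec.

```python
# Sketch of the throw-distance arithmetic: a fixed-ratio lens puts the
# projector at (throw ratio x image width) from the surface.
# The 5.33m width is an assumed value chosen to match the post's estimate.

def throw_distance(throw_ratio: float, image_width_m: float) -> float:
    """Distance from lens to surface for a given throw ratio and image width."""
    return throw_ratio * image_width_m

ust = throw_distance(0.39, 5.33)   # ~2.08m with a 0.39:1 UST lens
near_unity = throw_distance(1.0, 5.33)  # same image with a 1:1 lens
```

The same function shows why longer-throw lenses need the depth the post mentions: at 1:1 the same image needs over 5m of throw.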
  8. Some information is lacking for a useful answer to your questions: how big are the two screens you would like to use, what's the total width, and what gap would you have in the middle (it might help if you draw this out)? Are the screens flown or ground-supported? And what throw distance would you have for the projection? An ultra-short-throw lens is typically around 0.39:1 (so 0.39m of throw for 1m of width), and will be a fixed ratio. I'd suggest contacting any friendly rental AV companies you use for specific models; you definitely don't want to be purchasing for a one-off. Your suggestion for control and playback is fine; I'd suggest mapping the master opacity to a DMX channel so you can enforce a blackout if something goes wrong. There are other ways you can avert disaster, but that's a good start.
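The master-opacity-on-a-DMX-channel idea is just a linear mapping from an 8-bit channel value to an opacity fraction; a minimal sketch, assuming a standard 8-bit channel:

```python
# Sketch: mapping an 8-bit DMX channel (0-255) onto a playback system's
# master opacity (0.0-1.0), so the lighting console can force a blackout
# by dropping the channel to zero. How the value reaches the software
# (Art-Net, sACN, a USB interface) is left out of this sketch.

def dmx_to_opacity(dmx_value: int) -> float:
    """Clamp an 8-bit DMX value and scale it to an opacity fraction."""
    return max(0, min(255, dmx_value)) / 255
```

The clamp matters: a glitched or out-of-range value should degrade to full black or full white, never crash the playback chain mid-show.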
  9. I'd probably have suggested finding a 2011 MacBook Pro, upgrading the HDD to an SSD, and using that; reported compatibility runs up to OS X 10.11. But going with a more modern product is probably better in the long run, and you now have a spare Focusrite.
  10. Or something that's not EOL, like magicQ, which will do 64 universes for free and gets updates. Neither dot2 nor magicQ will allow USB devices to be used for output directly. (You could technically use QLC+ as an Art-Net/sACN-to-USB layer, but that's a silly amount of risk to avoid buying a one-output Art-Net node.)
  11. Visualisers

    Personally I'm a big fan of Capture, but as you've ruled that out, let's explore some other options...

    L8, as you mention, has some very cheap options, but with limitations: for the €88 option you have fixture limitations, no paperwork, and limited input universes. There's a nice helpful table of what you get with each license: https://l8.ltd/m/

    Depence2 is a very pretty visualiser and can be used to create some really good-looking renders. It's also not much cheaper than a Capture Symphony license.

    Lightconverse: also not cheap, but has lower tiers as well.

    Vectorworks Vision: similar price, but can be a one-off purchase, and very "industry standard".

    There are loads of really terrible abandoned projects where people have thought "let's build a visualiser and make it free and amazing", and then realised that making something usable is quite difficult. Also: https://xkcd.com/927/

    What's worth considering is that many lighting consoles come with some level of visualisation. Avolites (Capture), MA (MA3D, or built in on MA3), and Chamsys (magicVis) all include it freely with the consoles. You can also use merging to drive these visualisers from other controllers by sending data through the native console (or PC-based solution). Using MA3D or magicVis with a different data source is very simple to set up and completely free; Avo would require licensed hardware of some kind. It really depends what you're going to use it for, and whether you'll need to produce files other people can work with.
  12. Don't go down the HyperDeck route; this isn't what they're designed for and it will only impose restrictions on what you can do. The solution would be a Mac Studio running QLab. It has up to five outputs, which will do what you're looking for, but I would suggest (as you're staying at 1080p for now) also going down the FX4 route. The Mac will be much more reliable when using fewer physical outputs, and you don't need to be as specific when selecting adaptors and outputs: HDMI for a GUI head, then Thunderbolt to DisplayPort into an FX4 SDI, and SDI distribution to the projectors.
  13. Yeah, there isn't really a way to do rentals, other than of a person with a license. As a person with a full Capture license, and proficient with it: if you would like some paperwork or a pre-vis file creating, feel free to PM me with more details or to discuss what you need.
  14. As sameness said, set everything to the same frame rate and the same scan mode. There's generally no need to be running interlaced anywhere in a live event scenario, so my preference would be either full 50p or 60p* depending on region. You should then also have a sync generator in your chain and feed it to all cameras, your switcher, and any other routing that can accept a genlock signal, and it should also be set to the frame rate your kit is operating at (technically the other way round, but set it all the same). The videos you shared definitely show tearing, but it looks rather severe. I'd suggest you verify the output of the camera on either a field monitor or through a format converter into a normal screen. You should also check that anything you're viewing the output on is receiving and reporting the correct signal. You're right that the ATEM mixers process the multiviews separately; they can't really be relied upon for critical viewing. *Exceptions apply
  15. I find this an interesting development for a lighting desk, as media servers (or at least disguise) have had 'selectable output cards' for a while, which is great as you can change your outputs for the specifics of the job. I think it's definitely driven by the intention of heavily utilising network protocols for both output and control, which the inclusion of two 10Gbps SFPs and four 1Gbps ports on board is geared towards (a mental amount of networking capability for a lighting console). I'm surprised there are no permanent physical outputs at all, and also that you need to use a bay each for MIDI and LTC, which should probably be built into the hardware; although OSC is much more of a thing these days, and other network-based control is available, so MIDI at least isn't such a deal-breaker. I'd guess a 'standard' I/O config would probably be MIDI, LTC and two DMX modules.