It will always be arguable what could have been done within those confines.
And I respect you bringing up Coen, Fincher… which raises a good point. There is a line at which the conversation becomes “What I think you should have done is this.” And there’s definitely a gotcha there, because at that point the argument becomes subjective.
It becomes more about taste. About pineapple on pizza.
Well yes and no. If IOI went back to letting Platige do it, the argument is “That money could instead go into further development of Glacier’s cinematics mode.”
Clearly it is a budget issue, a time issue, and an engine-limitations issue. If you watch the Hitman 1 cutscenes, as you say, they have a great cinematic quality to them. And even in the Hitman 2 still images you could often see an eye for detail. The issue is creating effects, camera systems, lighting etc. within the engine itself. You’ll also notice the cutscenes in Hitman 3 are a lot shorter and get to the point. It probably doesn’t help to use proper close-ups and medium close-ups when the facial rigging etc. isn’t at the quality needed.
Honestly, given what kind of game Hitman is, I’m not exactly asking for the best of anything. I mean, they’ve always had good art direction. Not a single level looks bad in WOA.
The Hitman 2 cinematics were meant to reflect a stylised comic book, though clearly budgetary issues were the reason for this approach. Very thematic, considering H2 constantly references its comic series (more than even I realised).
I also feel they are not far off. From a hit-making standpoint they are not far off either: HITMAN 2 supposedly made back its costs within the first week of release.
Which is probably why I felt a little disappointed that the cinematics did not really improve past the stiff quality they had in Haven.
Probably what we can look forward to in the future of Glacier, 007, and HITMAN:
This is UE4, but this is more a case of “everybody is doing it”.
EDIT: Hold on… This will be an online tool that exports directly to Maya? So that means IOI can just directly use this to insta-make their actors in the future. Hmm… cool.
My main issue with the cinematics is that very often the facial expressions of characters will just jump randomly. I believe it’s in the last resort ending cutscene where there’s a part where 47’s eyebrows start up and then teleport down with no smooth transition. It also happens to Grey’s face when he and 47 are talking in the cutscene after Dubai (they also look really, um, uncanny valley there, due to the lack of facial expressions). Basically it’s great IOI was even able to come up with animated cutscenes for H3, but the faces really take me out of it sometimes.
This makes no sense whatsoever - there is never “everyone is doing it” with game engines - they all develop individually and focus on different aspects in order to be competitive with one another.
IO Interactive use a bespoke engine and quite possibly have their own metahuman-style creator, along with many other considerations to take into account, like rigging configurations, cloth effects, etc.
Saying this is the future for IO Interactive is like posting a picture of Mercedes Benz and saying it’s the future for Porsche. One thing has nothing to do with the other.
The main excitement over the Metahuman creator is that it means all the studios who use Unreal 4 are now going to be able to produce figures, so smaller scale shops will be able to put more actors and a wider variety of actors in their games because they won’t have to spend fortunes on 3D modelling them. It’s very exciting to people who are wanting to see more diversity and better depictions of different kinds of people in games.
Big shops will still be doing it their own way because they have custom considerations - lots of the smaller shops will too.
It’s complicated, and it helps no one to just post “I found a thing on the Internet” and assume it applies to everything.
Meaning: “This is an area of recent development in game engines”.
Also, you said it yourself: it is a tool/technological development meant to reduce the cost of producing high-quality biped actors.
In addition, if you go down the “we are really cheap and lazy” route, the Metahuman system as developed by DynamiXYZ will primarily be integrated with Maya as well as UE4.
What part of “integrates and exports to Maya” did you not understand? In a previous post you mentioned that IOI did not prioritize the development of cinematic-level biped actors. In that situation, a tool like this - even if it’s just IOI using Metahuman itself and exporting to Maya - would be a big shortcut for them.
You say IOI probably have their own biped generator. Fine. Then on the basis of HITMAN 3 we can only conclude that either Glacier’s own mode will “reach this level eventually” or that the onset of Metahuman - with its export capability to Maya - can replace what they were developing and they can focus elsewhere.
So yes we should look forward to this level of output from the next HITMAN or the 007 game.
IO Interactive are already heavy users of Autodesk software. So the Maya integration would be great for them. At any rate, the level of quality shown in the demos I posted was definitely above the level HITMAN 3 attained. So this is a stretch goal.
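For what it’s worth, the Maya end of “exports to Maya” could be as routine as a one-off import script. This is only a minimal sketch of what that might look like; the file path, namespace, and the idea that IOI would do it exactly this way are my assumptions for illustration, not anything IOI has said:

```python
# Minimal sketch: pulling an exported MetaHuman-style FBX into Maya via maya.cmds.
# The path and namespace below are hypothetical; the point is only that the Maya
# side of the pipeline is a routine import, not the hard part.
import maya.cmds as cmds

# Make sure the stock FBX plugin is loaded before importing.
if not cmds.pluginInfo("fbxmaya", query=True, loaded=True):
    cmds.loadPlugin("fbxmaya")

# Import the exported actor into its own namespace so it doesn't clash
# with existing rigs in the scene.
cmds.file("D:/exports/metahuman_actor.fbx", i=True, type="FBX",
          namespace="mh_actor", mergeNamespacesOnClash=False)
```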
I think you are not considering the proper relevant possibilities.
You bring up an interesting analogy. Mercedes Benz debuted one of the world’s first anti-lock braking systems in 1978. Porsche was one of the companies that immediately knew it had to do some catching up: Porsche had been working on ABS since the 1960s but briefly abandoned it until Mercedes unveiled their version.
The only problem is that IO would still have to make everything with their own assets, like hair and wrinkles, at which point just doing it the old way may be faster/cheaper.
Cheaper, yes. Faster is arguable. “Better” is another arguable point. More precisely: “better within a certain amount of time”.
HITMAN 3’s cinematics definitely look like the product of compromise. And depending on where those compromises were, the gains from a 3rd-party tool like Metahuman can vary. But usually the odds are that any tool which lets you get an asset instantly is a good head start. The better the tool, the less time and effort it takes to rework that asset.
For example, if Glacier’s biped generator had only limited face sculpting, you’d have to export to another app just to get that done. Depending on expertise, a different artist may be needed for that step as well. And then, depending on what that artist had to do, you may need to come up with another process to bring the reworked asset back up the chain to Maya and then to Glacier.
So at worst you’re comparing something like the above against, at the other extreme, the promise that you can just “drag sliders and icons” and get the same result without any further export/import, or even the talent of another artist.
Auto-rigging is another key area like this and so on.
But the main point I am making with Metahuman is that it’s much like when Mercedes unveiled ABS to the world. Even if you are a different automaker, you see this and you think: “That has to be the future.”
It’s a question of whether you are close enough in capability to build that feature internally, or whether you have to accept that time has moved too fast and consider purchasing the capability, since it’s on the market now (or will be very soon).
No. A recent development in the Unreal 4 engine, which is primarily of interest to the industry discussion because it is widely licensed by smaller shops and by large shops that find it appropriate to the work they are doing.
It has no relevance at all to the vast majority of engines in the industry, including, most importantly, the Glacier engine. The one you put in all caps in the topic title.
I don’t know why it is so hard for you to stop spreading outright misinformation and then doubling down on it like you’re an authority.
I understand it perfectly. What is clear is that you don’t understand the basics of any of what you are talking about and are continuing to use the forum to spread misinformation. I do not want the forum being used for that, and I do not want to have to deal with the mayhem that occurs when people mistake your misinformation for facts and start making outrageous demands.
Exporting directly to Maya is not even vaguely useful or interesting; it is downright basic shit which 3D scanning has been able to do for decades, and nobody cares about it because the real work is all the other aspects, including but not limited to:
Stylizing in a consistent manner that works with your themes and other assets
Rigging in a manner that is consistent with your planned animations
Polygon counts that are consistent with your system resource use
Rigging effects that are consistent with your system resource use
If you are going to make a game with a different style and have to spend a lot of time redoing all the figures to change the overall look - it’s not helpful.
If you are going to make a game with different approaches to animation and so need to do all the rigging structure from scratch - it’s not helpful.
If you’re not intending on having figures that look like they came out of this metahuman creator, and you’re planning on having them in a different engine, and you already employ a large team of people who have experience in making crowds in the style you’re going for - it is utterly worthless.
I am in fact aware of not only the possibilities but also the complications - which are the more important factor one should think about before announcing “we’ll be getting”, because fan expectations cannot be put back in the bottle.
It is wildly irresponsible to refer to your imagined possibilities as “we can look forward to”.
If you’re just going to use this thread to post misinformation and claim that it’s facts - it’s going in the trash. That is not negotiable.
My 2c: they’re fine. Much better than H2’s slide shows, much worse than H2016’s cinematics - but then those are some of my favourite cutscenes in any game; a shame they were unaffordable without Squeenix money.
I actually kind of wish they’d done them in-engine rather than having sub-par CGI, but then I’m very sensitive to video compression, so for a decade or more it’s been very rare that I’ve thought game cutscenes look better than in-engine graphics, especially when they’re designed for console versions rather than PC on high settings (ironically H2016 was one of the few exceptions). I think they did do some of that - I mean, the whole intro section in Berlin is basically an interactive cutscene; there’s no other reason for the petrol station to exist. Meeting Diana in Mendoza could likewise have been done in a cutscene, but IO probably recognised in-game looks better.
H3 Cinematics are in-engine and real time. There’s a video my nephew has (but hasn’t uploaded to YT) of flash phones and Napoleons being triggered while Diana and 47 have their realtime cinematic at the start of “The Farewell”. Because the stuff was near Diana, one of Tamara Vidal’s bodyguards ends up listening in real close as 47 and Diana talk about taking Vidal out - which is hilarious.
And then there was the explosion.
I see no reason why the other cutscenes would not have been realtime as well, resource-wise. It is possible they are not. But the only real difference at that point is whether you use primary storage (memory and GPU for a realtime-rendered cinematic) or secondary storage (a video clip played from the hard disk with minimal memory and GPU usage). Both are clearly from Glacier. Secondary storage would also add more heft to the game’s install size compared to running it realtime.
The motivation to render out the images ahead of time and assemble a video clip would be if you planned to do a lot of post-editing in After Effects or something to make the video look better, or if your plan was to run all the cutscenes on some RTX 3090 rig and record them with all settings on “Ultra”. But that’s not always wise since, last I checked, game engines (even UE4) were incapable of giving you separate passes (colour, gloss, transparency, mist/atmospherics, etc.), which is what you need to make the most of After Effects - so all that video really does for the game is make your install size fatter.
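To put rough numbers on that install-size point, here’s a back-of-the-envelope sketch. The runtime and bitrate figures are purely assumptions for illustration, not actual HITMAN 3 numbers:

```python
# Rough comparison of pre-rendered video vs. realtime cutscenes (install size only).
# All figures below are illustrative assumptions, not actual HITMAN 3 data.

cutscene_minutes = 45      # assumed total cutscene runtime across the game
video_bitrate_mbps = 20    # assumed video bitrate in megabits per second

# Pre-rendered: every minute of video costs disk space in the install.
video_size_gb = cutscene_minutes * 60 * video_bitrate_mbps / 8 / 1000
print(f"Pre-rendered video adds roughly {video_size_gb:.1f} GB to the install")

# Realtime: the cutscenes reuse the characters and levels already shipped for
# gameplay, so the extra install cost is mostly just animation and camera data.
print("Realtime cutscenes add almost nothing beyond animation/camera data")
```

Under those assumed numbers that’s close to 7 GB of video you could skip shipping by keeping the cutscenes realtime.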
So based on that, I would say: everything in H3 is realtime and basically running “live” on your PC/PS4/Xbox. And yes, it can look a bit like “bad CGI” because at the end of the day you’re going to see it on a TV screen just as you would any movie (in those parts of the game anyway).
I’m pretty sure they’re pre-rendered, as you can see the compression. Also, if you load them up from the menu there’s no loading time - I think because they’re not loading in assets, just queueing up a video. The cinematic at the Farewell is one of the examples I cited of it being done in-engine.
More information would be required. But the source either way is the same: the assets are clearly from Glacier, either rendered in realtime or pre-rendered into video form.