Problems with GPU simulation.

  • Sonia Kong COMMUNITY MANAGER

    Hello von Buelow Richard,

    Thank you for contacting us!


    For your information, GPU simulation has the advantage of speed, but it has difficulty handling collisions between layers. Please use GPU simulation for fast garment-making work, then switch to CPU simulation for the better completeness of your work.

    For more help, could you let me know your GPU device specifications?


    I look forward to your reply soon :) Thank you.

  • von Buelow Richard

    Nvidia RTX 3090. Too bad I can't really use the GPU cloth simulation - it doesn't handle layers or self-collision... which is why I use Marvelous Designer, haha. The feature isn't ready for primetime, that's for sure.

  • Angel Angel

    There perhaps needs to be some appreciation and understanding of how cloth collision (or any collision) computation works: it is a time-based, sequential process. All gold-standard, full-quality collision computations are run sequentially on a single CPU, because you cannot compute the next time step before the previous computation has finished. This is why high-end engineering 'quality' simulations run single events in sequence until there is enough statistical data to show some common behaviour. Only at that point can a computation like cloth be statistically 'chunked' into areas or vector trajectories (in Euclidean space) that match a statistical probability of exhibiting a certain behaviour, which means chopping out real event simulation in favour of lower-fidelity 'guessing' about where areas (chunks) of cloth will be at a specific frame step (interval in time).


    This is still inherently a sequential task unless it is distributed at the level of 'chunks' of cloth flowing according to the algorithm's best guess about where each chunk will likely be. The maths (which dictates the fidelity of any collision event) is inherently sequential unless statistically chopped into probable vector trajectories at linear time steps. If you then want to distribute that across many cores in parallel, you suddenly have a chicken-and-egg problem: in order to distribute a time-sequenced event you have to lose something, chopping out real events and using statistical algorithms to approximate the likely position of small and large chunks of cloth. The more you distribute this sequential event in parallel, the further the simulation moves away from a gold-standard linear, time-based calculation and towards a statistical best guess (which means more error correction and more chances for the collision guessing to go wrong). The maths and physics of random events mean you cannot guess with 100% certainty where a random particle will be before the event takes place - otherwise we could predict the lottery balls bouncing around in an enclosed chamber before the draw, or shortly after they load into the plastic sphere. Maths is not there yet - maybe in 3030.
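    The sequential dependency described above can be sketched in a few lines. This is a toy illustration only, not Marvelous Designer's actual solver: a tiny 1-D mass-spring strip stepped with symplectic Euler, where every time step reads the state produced by the previous step. The time loop itself therefore cannot be parallelised - only the per-particle work inside a single step can. All parameter values here are arbitrary.

```python
def step(pos, vel, dt=0.01, k=50.0, rest=1.0, damping=0.98):
    """Advance one time step. Depends entirely on the previous state,
    so successive calls form an unavoidably sequential chain."""
    n = len(pos)
    forces = [0.0] * n
    for i in range(n - 1):                 # spring between particles i and i+1
        stretch = (pos[i + 1] - pos[i]) - rest
        f = k * stretch
        forces[i] += f                     # equal and opposite forces
        forces[i + 1] -= f
    new_vel = [(v + f * dt) * damping for v, f in zip(vel, forces)]
    new_pos = [p + v * dt for p, v in zip(pos, new_vel)]
    return new_pos, new_vel

# Stretch the strip, then relax it over many strictly sequential steps.
pos = [0.0, 1.5, 3.0]                      # rest spacing would be 1.0
vel = [0.0, 0.0, 0.0]
for _ in range(1000):
    pos, vel = step(pos, vel)              # step t+1 needs the result of step t
```

    After enough steps the particle spacing settles near the rest length. The per-spring force loop is the part a GPU can spread across cores; the outer time loop is the part it cannot.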


    So, for now and into the distant future, splitting an inherently linear maths problem across many cores in parallel will always be fraught with inaccuracies. It is nonsense to think you will ever get a gold-standard simulation of what is a linear event in maths, completely distributed in parallel, without losing fidelity in the end result. This is where human expectations need to recalibrate. Cloth and particle collision is not like ray tracing or distributed rendering; it is substantially more complex. And if you take into account that MD is perhaps using the most advanced cloth simulator on the planet for the cost and time, you can start to put things into context: 1) it is the best value for quality simulation time on CPU, and 2) the GPU simulation is noted in the manual not as a final-quality simulation but as a general speed-up for work-in-progress tasks when arranging your garment. MD has positioned the technology appropriately relative to where the maths stands in history, bringing the best possible collision technology to your desktop at a reasonable price. No small task.


    However, if the 'human' component in the simulation workflow never adjusts their thinking about what to expect from the next 50 years of collision maths advancement, they might be continually disappointed. Because it's not going to change the quality, only the approximate task distribution of a 'faux' simulation that attempts to trick the eye and mind, using a statistical best guess versus the accurate sequential (linear) computation that more closely follows a real (random) collision event. Until A.I. learns to adapt more algorithms (rather than just one or two general equations), distributing random events across many cores at linear time steps will remain a 'hard' maths problem to solve - it's an elastic problem.


    Only so much of a linear random event can be distributed before the result degrades over time-based steps - a fact about the limitations of maths in the 21st century. You cannot guess the outcome of the lottery (using GPUs or CPUs) before the event has occurred. If you could, no event would be chance; we would all live a deterministic existence, and - boy, would that be a world not worth... the roll of a dice!

  • von Buelow Richard

    Appreciation? That the task is difficult... OK. It still isn't very usable at all unless you're building a one-layer poncho - and even then it would probably just overlap and fold in on itself. If I bought a Tesla, I really wouldn't care how difficult it was to build and design; I'd just want it to work. If this were an open-source project like Blender, I would say nice try, cool. This is a product billed MONTHLY. Trying to equate a cloth sim to guessing lottery numbers is a big stretch for me - these are all connected points on a plane. And layer collision not working, that's a big oof.

    If I "never adjusts their thinking about what to expect from the next 50 years of collision maths advancement, they might be continually disappointed. Because it's not going to change the quality" - sad to hear the core of the product isn't going to get any better in 50 years. Adjusting my expectations means not using the GPU simulation, as it creates more issues than it solves.

  • Angel Angel

    Well, the core of the cloth simulation will get better, but it will take time to average out all the quality-simulation possibilities when splitting linear tasks over many GPU cores. That is a complex computational task in which great strides in fidelity are being made, so improvement is most likely just around the bend; however, single linear computation is still, and will likely always be, the gold standard - rock-solid and reliable. What passes the human eye/mind test in future (like A.I. speech and conversation) is all about semantics and is subjective to each person's experience of tech. Will a machine beat a chess champion? Yes. Will a machine model cloth better than you or I? Yes, in time. So where that crossover point sits is what you need to think through - it's today, if you want the answer, but it requires certain steps. In future those steps will get faster, higher fidelity, almost invisible... but that is 'then' and this is 'now'. Deal with the present and know the crossover point; that is how you place what you do in adding value to any given profession.


    GPU sim is actually a huge improvement to the MD work-in-progress workflow if you use it in the context the manual states, and don't use it out of context. For layered garments it's massively useful when used right, as there can be a whole lot of assembly to be done - which takes time, and is now much faster.


    The fact is, Marvelous Designer represents the best value for money in leading-edge cloth simulation on the market, at a desktop-accessible price point.


    Blender is a good application, sure, but it's not the sole solution for all cloth-sim outputs. I use Blender in a fashion context not because it's open source but because it's the best flexible fashion pipeline for what I do. I used to use Modo but found Blender better and more adaptable, with faster modelling tools.


    GPU computation helps with all the work-in-progress (WIP) tasks that need to occur on a digital garment build, and that is a huge time saver in the workflow: arrangement, sewing, drafting, simulation drafts, layer positioning, etc. are all sped up. But when it comes time for that final pose and the all-important beauty-pass final drape simulation, use the CPU as recommended in the manual.


    Appreciate the finer points when researching how stuff works. That's the real world - research is generally free; you just need to make the time.


    MD is great at simulating ponchos, by the way - below. You just need to keep your six-shooters out of view. Done in under a minute in MD - that would take ages in Blender's cloth sim to get the same fidelity (if ever). So know when and where you're packing a good 'contextual' argument.



    You are using one of the best technologies on the market. So learn how to master it and use it appropriately to get a return on your effort.


    Think through where your argument might be truly heading. Into the pond of reflective thought.





  • von Buelow Richard

    I don't want/need zen advice from you, lol - what is this response? Telling someone to read the manual isn't helpful. I didn't research GPU cloth simulations - why would I? I have better things to do with my life. I was just wondering if there were settings I could change so it's actually usable; I just expected it would work similarly to the CPU. It's obvious this program needs a lot of work, or a ton of time to learn all the terrible little idiosyncratic issues. This isn't a career, just a hobby. Don't waste your time typing out another 10 paragraphs, zen master - I won't read it.
