Slow performance and stuck on Synchronization

  • Support Agent

    Hello Giovanni Farina,


    Thanks for getting in touch with us.


    Regarding your concern, could you please try lowering the number of CPU cores in use, following the instructions below? Due to the design of the Marvelous Designer/CLO simulation core, the software may not perform well on CPUs with more than 12 cores. Setting it to 12 cores or fewer in Simulation Properties should provide a clear performance improvement.


    Please follow the steps below.

    1. Go to 'Preferences' - 'Simulation Properties'.


    2. Change the 'Number of CPU in Use' in the 'Property Editor' window.

    If you still have the same issue after lowering the number of CPU cores in use, please feel free to get back to us :)

  • Angel Angel

    You are working incorrectly; your hardware is fine and so is the application.

    Many artists run into this problem with collision-based media: you need to plan your workflow, because it's a staged process, and the friction you are burning shows up as fps in time-step simulations. This is what flummoxes nearly every artist I come across.


    Don't, and I repeat, 'don't', set your particle distance too low (e.g. 5 mm, a very dense mesh) too soon when creating large areas of cloth, and also recognize the domain space limitations. Yes, MD is for clothing, but here you are making a big bedspread-sized area of cloth, so yardage (the cloth area under computation) takes a massive performance hit relative to the cloth in an average garment. The same goes for furniture: massive cloth yardage usually gives an exponential fall-off in performance.
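To put rough numbers on that yardage penalty, here is a minimal sketch assuming particle count scales roughly with area divided by the square of the particle distance; the bedspread dimensions and spacings are illustrative, not from this thread:

```python
# Rough particle-count estimate: count ~ area / spacing^2.
width_mm, height_mm = 2300, 2400          # hypothetical queen bedspread
area_mm2 = width_mm * height_mm

for spacing_mm in (30, 20, 5):
    particles = area_mm2 / spacing_mm**2
    print(f"{spacing_mm} mm particle distance -> ~{particles:,.0f} particles")
```

Dropping the particle distance from 20 mm to 5 mm multiplies the particle count by 16, and solver cost grows faster than linearly in particle count, which is why large yardage at fine detail falls off a cliff.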


    So I expect the penny is starting to drop. This machine that goes 'ping' was designed to compute around the upper limit of a standard garment's yardage, and is now being pushed well beyond that. So recalibrate, both for the medium (bedding and furniture) and in the workflow: less looking at your work at a small particle distance early on, and more general layout and arrangement to get things into position at a reasonable mesh density.


    When you make bed sheets you don't need a high mesh count at the start of the project; that is bonkers, as you pay the exponential simulation penalty at the very beginning. So use a coarse mesh with a larger particle distance to edit and assemble your cloth into a basic drape/pose arrangement. Use that time and scale to make all your edits to your patterns and trim detailing (where the simulation is fast, with high fps under GPU). Then, when you are happy with the basic layout and composition, switch off what you don't need and work on one piece at a time: reduce the particle distance to get that nice detailed creasing, freeze it, and move on to the next element. Finally, simulate the 'beauty' pass on CPU at the highest quality.


    'Never edit a high-density mesh' should be your mantra. Only make edits in a fast, low-poly simulation. Can you now start to see how you may have the workflow backwards? The goal is to keep the entire assembly (sheets etc.) to roughly the mesh count of an average garment. 'WTF', you are likely thinking now... but yes, after 8 years of doing this, that's the goal, and to do it at speed you need to pull out all the simulation tricks.


    So think: do I need high-detail simulation at every point in my workflow? No, you don't. That's just mucking about. What you want is a balance of simulation speed and control, dropping down to high frequency when it matters and not before. If your design can get its detail from depth maps or normal maps, rip that out of the simulation workflow; it will save you a heap of project time.

    Most designers get this totally wrong, and there are some incredibly bad tutorials out there propagating this flaw in the workflow. People take these accelerated tutorials as gospel (tutorial magic, where the author chops out the three weeks it took to create the whizz-bang eye candy), but it isn't, and more often than not it's really bad process. Know how to deconstruct frequency in the drape detailing before you start, then implement a workflow and design strategy for it, relative to your project. That statement may fly over your head at first, but that is what is required at the get-go, and it is usually completely missed in the 'wing it and cross that bridge when I come to it' style of workflow tutorial (that took a month to create).

    Start by working out how many simulation polys you have in the bank to spend. That's the cash you have to expend, where the rubber meets the road on a productive job. Don't spend what you don't have, as that causes massive performance fall-offs. Some basic maths at the beginning of a large-cloth-area project can shed a lot of light on how you need to attack the problem in stages across such massive yardage.
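One way to do that "basic maths at the beginning" is to fix a particle budget first and solve for the coarsest particle distance that fits it. A minimal sketch, assuming count ~ area / spacing^2; the budget and piece areas are hypothetical:

```python
import math

# Hypothetical budget: roughly the particle count of an average garment at fine detail.
budget = 150_000

# Illustrative cloth areas in mm^2 (not from the thread).
pieces = {
    "top sheet": 2300 * 2400,
    "fitted sheet": 1600 * 2100,
    "pillowcases x2": 2 * (500 * 750),
}

total_area = sum(pieces.values())
# Coarsest spacing that keeps the whole assembly inside the budget,
# from count ~ area / spacing^2  =>  spacing = sqrt(area / count).
min_spacing = math.sqrt(total_area / budget)
print(f"total area: {total_area / 1e6:.2f} m^2, finest affordable particle distance ~{min_spacing:.1f} mm")
```

If the spacing that comes out is coarser than the detail you need, that tells you up front which pieces must be frozen, detailed one at a time, or shifted to normal/depth maps instead of simulation.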


    When I have more time I can write down some performance tips.

    Crease maps: you can make sheet crease maps, add in frequency-level detailing from a bit of tissue paper, and blend it into your work.

    Carrying out a project 'recce' to determine what additional MD assets you need is part of the workflow.

    Soft versus crisp, blended normals, grainline: will they work together as you composite frequency in stages?

    Do you drape the detail or project the frequency?

    Projected detail using depth and normal maps.

    Do you transfer frequency?

    Do you apply a sub-component approach to the build (a non-destructive approach that uses inputs rather than costly simulation frame steps)?

    The goal should be under one hour for a fully detailed bed and drape of sheets.

    Prepare crease frequency maps with the detailing associated with the custom design details you will need later in the process flow. Set them aside as a separate simulation file that can be turned into a 'trim' map producing hundreds of detail variations from one 5-minute simulation. Set that to a grid that fits your final top quilt or sheet (more maths and forward thinking). I have not yet seen a decent tutorial on how to make bed sheets in all their glorious detail with MD.

    Seam trim details as image maps that match patterns.

    Silhouette, brocade, or material thickness = drape-implied physics; but what do you split out and what do you keep at the simulation stage versus the texture stage? The camera angle of the planned final shots will likely tell you this. Think forward to the final model's performance needs: close up or far away?

    See how it goes. Most of all this gets dropped from a project as the tutor launches into a pithy half-hour video that compresses a month's work into 30 minutes of misguided thinking, which belies what can be broken out and why. So plan your project.

  • Giovanni Farina

    Thank you very much for the time taken; it explains a lot. I never went to 5 too soon; I used to start at 20, sometimes at 30, but I had an issue with the pattern penetrating the avatar. After reading your detailed explanation, I guess my main problem was going from arranging the fabric at 20 to finalizing at 5, fabric by fabric: when I finished the first one, already set at 5, I would then start the next. This time I will try to create them all first, and only then finalize them one by one.

    As for final fine details and extras, my workflow is to bring them into ZBrush to add details, then Substance Painter to bake the high-poly details into the low-poly mesh, and back to Maya for the rest. I will change my workflow right away.

    The only thing I'm not sure I understood properly is when you wrote:

    "Set aside as a separate simulation file that can be made into a 'trim' map that makes hundreds detail variations, from one 5 minute simulation. Set that to a grid that fits your final top quilt or sheet (more maths and forward thinking)"


    By the way, thanks again, you helped a lot.

  • Angel Angel

    So the support post was interesting, as the number of CPU cores you assign can actually improve performance. I also lower the core count; it sounds odd, but it really does work, which is why I often use Windows priority settings and limit CPU cores across some apps when using MD. So that is sound advice I didn't mention.


    However, on large areas of fabric you are still going to get performance fall-off with these types of furniture products, like fabric-covered sofas and double beds, if you don't deploy a stepped, planned workflow. In 8 years of doing furniture and bedding, the choice of workflow remains the slow point, and also the fastest way to speed up your project build outside a CPU resource tweak. Workflow is possibly the single biggest performance upgrade you can freely make to cut project times when simulating cloth yardage.


    In Windows you can also control how your workstation allocates CPU resources to the software, which balances performance across the system and any additional apps you have open as you work. I will often have 3 apps open as I use MD, pushing the project in real time across an add-on bridge I created in Python, so in that case my system balances resources according to the software priority as I work. (You can set that in Windows.)
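One concrete way to cap the cores an app sees on Windows is the built-in `start /affinity <hexmask>` command, where the mask has one bit per logical core. A small sketch to compute that mask; the 12-core figure comes from the support answer above, and the executable name is hypothetical:

```python
# Build a CPU-affinity bitmask with one bit per logical core.
cores = 12                   # the 12-core cap suggested by support
mask = (1 << cores) - 1      # lowest 12 logical cores set -> 0xfff
print(f"start /affinity {mask:x} MarvelousDesigner.exe")  # hypothetical exe name
```

Task Manager's "Set affinity" dialog achieves the same thing interactively, but the command form can be scripted into a launcher shortcut.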


    More efficient workflow wins the day hands down.
