
OptiStruct for Linear Analysis (7) Advanced Topics

In this lesson we will discuss some advanced topics.  Topics include debugging, parameters, parallelization, run options, and output management.  Watch the video below to learn more.

 

1. Debugging Guide

In this section we will look at some general checks as well as some specific analysis checks.

 

General Checks (Part 1)

  • Check for syntax issues using the run option -check (see the sketch after this list).
  • Check for modeling issues with the HyperWorks Model Check tool in the Validate ribbon.
  • Verify that the units are consistent (mass, applied forces, etc. in the .out file make sense).
  • Animating the results (with scaling) will show obvious mistakes and helps in engineering judgement.
    • Missing boundary conditions or loading conditions
    • Material and property definitions
    • Element quality
    • Mass properties
  • Verify that the number of rigid body modes is consistent with the number of free components.
    • Perform a free-free (without any SPCs) normal modes analysis.
  • Perform a check to expose unintentional constraints.
    • Add the GROUNDCHECK entry and check for the elements which fail the test.
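A minimal sketch of these two checks (the input file name is illustrative, and the exact GROUNDCHECK option syntax varies by version – verify it against the OptiStruct Reference Guide):

# Command line: syntax-only pass, no solution is computed
optistruct my_model.fem -check

$ I/O Options section of the .fem deck: request the grounding check
GROUNDCHECK = YES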

 

General Checks (Part 2)

  • Perform a check to expose massless mechanisms.
    • Add the MECHCHECK entry.
  • Verify force balance within the model (see the deck fragment after this list).
    • Check the SPCF output in the .out file.
  • A normal modes analysis exposes issues – the number of rigid body modes should be as expected.
    • If free-free, there are six “rigid-body” (freq = 0.0) modes.
  • Check that force balance is satisfied (epsilon in the .out file) – epsilon should be numerically zero.
    • Epsilon greater than about 1.0E-8 may indicate trouble.
  • Check load paths – use the grid point force balance to “trace” loads.
    • Check stress contours for “consistency”.
    • “Sharp” corners indicate bad modeling.
    • Check stress discontinuities.
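A minimal deck fragment for the force-balance and load-path checks above (request names follow common OptiStruct usage – verify against the Reference Guide):

$ I/O Options section: recover SPC forces and the grid point force balance
SPCFORCE = ALL
GPFORCE = ALL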

 

Normal Modes Analysis Debugging

  • Try AMSES (EIGRA) and LANCZOS (EIGRL): are the results comparable?
  • Try AMLS (EIGRL and PARAM,AMLS,YES) if available.
    • AMSES and AMLS will catch massless mechanisms automatically; they also output and constrain those DOFs.
  • Try LANCZOS with PARAM,AMLS,2 to enforce constraint reduction.
  • Check that the upper bound on the EIGRL/EIGRA card is reasonable.
  • Try non-blank values for ND, V1, and V2 on the EIGRL/EIGRA card (see the card sketch after this list).
    • Try a small V2, e.g. 10 Hz (models with a low ND but a high V2 might still fail due to too many modes).
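A sketch of the two eigenvalue cards in free format, with illustrative IDs and bounds (the leading fields are SID, V1, V2, ND):

$ Lanczos: modes between 0 and 100 Hz, at most 20 modes
EIGRL,10,0.0,100.0,20
$ AMSES: the equivalent request
EIGRA,20,0.0,100.0,20
$ Subcase Information: select the eigenvalue card by its SID
METHOD = 10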

 

Inertia Relief Analysis Debugging

  • When comparing two models with PARAM,INREL,-2, note that the models should (a minimal setup sketch follows this list):
    • Have the same stresses and compliance
    • Not necessarily have the same displacements
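A minimal inertia relief setup sketch (subcase and load IDs are illustrative):

$ Bulk Data: inertia relief with automatic support generation
PARAM,INREL,-2
$ Subcase Information: a static subcase with a load and no SPC
SUBCASE 1
  LOAD = 5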

 

Buckling Analysis Debugging

  • Buckling is a very sensitive analysis. Even if the linear static run seems fine, small modeling issues can become apparent in a buckling analysis.
  • Run normal modes and make sure there are no rigid body modes.
  • Try PARAM,SHPBCKOR,2 (a subcase sketch follows this list).
    • This sets the order of approximation used in the plate bending geometric stiffness for linear shell elements.
    • With a value of 2, no transverse shear is considered, only bending; this is better for thin shells.
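A minimal linear buckling setup sketch (IDs are illustrative – verify the STATSUB/METHOD usage against the Reference Guide):

$ Subcase Information
SUBCASE 1                  $ linear static preload subcase
  SPC = 1
  LOAD = 2
SUBCASE 2                  $ buckling eigenvalue subcase
  SPC = 1
  STATSUB(BUCKLING) = 1
  METHOD = 10
$ Bulk Data
EIGRL,10,,,5               $ request the first 5 buckling modes
PARAM,SHPBCKOR,2           $ bending-only geometric stiffness for thin shells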

 

Heat Transfer Analysis Debugging

  • Nodal temperature input is defined through SPC without DOFs (see the sketch after this list).
    • CHBDYE definition
  • Temperature-dependent conductivity only works with NLHEAT.
    • TABLEM1 in MATT4 defines multipliers, not the actual conductivity.
  • Conductance/area is required in PCONTHT since v14.210.
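A sketch of the nodal temperature input described above (grid ID and temperature value are illustrative):

$ SPC with the component (DOF) field left blank: D becomes the prescribed temperature
$    SID, GID,  C, D
SPC, 3,   1001, , 100.0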

Debugging guides for OptiStruct Linear Analysis can be found on the Altair Community.

 

PARAM

Parameters, together with their values, are generally used in the Bulk Data section to manage, control, or request special features. These parameters are specified with the PARAM command and are classified as Subcase, Material, Element, Loads, and Output parameters (a short example follows the list below).

  • Different Subcase Types

  • Elements, Materials, Output

  • Linear Analysis - for more details, consult the Parameter Menu
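As a short example, PARAM entries are single Bulk Data lines of the form PARAM, name, value; the sketch below reuses parameters mentioned in this lesson:

PARAM,INREL,-2             $ inertia relief with automatic support
PARAM,SHPBCKOR,2           $ plate bending geometric stiffness order
PARAM,AMLS,YES             $ use the AMLS eigensolver, if available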

 

2. Introduction to Parallelization

Below are a few definitions for parallelization.

1. HPC and Computer Architecture

High Performance Computing (HPC)

  • Leverages computing resources – standalone or cluster
  • Message passing interfaces
  • Advanced memory handling capabilities
  • Large matrix factorizations, inversions, and manipulations across systems with many degrees of freedom


Computer Architecture

  • Node – Computing machine with a single socket or multiple sockets.
  • Socket – Each socket contains one processor.
  • Processor – Typically contains multiple cores/CPUs where computations are performed.
  • Core/CPU – Computations are performed here.
  • Thread – A sequence of instructions executed on a core; a single core may be able to handle multiple parallel computations (via multiple threads).

2. Cluster

A computational cluster is a collection of nodes that are connected together to perform as a single unit. 

The tasks assigned to a cluster can be internally distributed and reconfigured by various software to nodes within the cluster. 

3. Node

A node is a computing machine/workstation/laptop within a cluster. It consists of different electrical and electronic components, such as central processing units/cores, memory, and ports, which communicate with each other through complex systems and electronic pathways. Typically, a node consists of one or more sockets, each of which contains one physical processor.

4. Serial vs Parallel

Serial Computing

  • The solution is divided into discrete instructions.
  • Discrete instructions are executed sequentially on one logical processor.
  • At any point in time, only a single discrete instruction is executed.
  • Runtimes are typically high compared to parallel computing.


Parallel Computing

  • The solution is divided into sections, which are in turn divided into discrete instructions.
  • Discrete instructions from all sections are executed simultaneously on multiple logical processors.
  • At any point in time, multiple discrete instructions relating to multiple sections are executed simultaneously.
  • Runtimes are typically lower than in serial computing.

 

OptiStruct High Performance Computing

High Performance Computing in OptiStruct allows the use of multiple threads within a single shared-memory system (SMP) for speedup; multiple nodes in a distributed-memory cluster (SPMD), via a Message Passing Interface (MPI) implementation, for further scalability; and/or a Graphics Processing Unit (GPU) via the NVIDIA CUDA implementation.

These parallelization options are available for standalone systems, clusters, or GPU-enhanced workstations. High Performance Computing options include:

  • Graphics Processing Unit (GPU)
  • Shared Memory Parallelization (SMP)
  • Hybrid Shared/Distributed Memory Parallelization (SPMD)

SMP

Shared Memory Parallelization (SMP) is a parallelization technique that incorporates the usage of multiple threads (or logical processors) in a node to solve problems.   SMP in OptiStruct does not require different executables or the installation of separate components for message passing.

Note: SMP runs can be activated using the -cpu/-proc/-nproc/-ncpu/-nt/-nthread OptiStruct run options in the Altair Compute Console or via the command-line script, as sketched below.
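A minimal launch sketch for an SMP run (file name and thread count are illustrative):

# Solve my_model.fem with 8 SMP threads on one machine
optistruct my_model.fem -nt 8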

SPMD

Single Program, Multiple Data (SPMD) is a parallelization technique employed to achieve faster results by splitting the program into multiple subsets and running them simultaneously on multiple processors/machines. SPMD typically implies running the same process or program on one or more machines (nodes) with different input data for each individual task. This combination can be termed Hybrid Shared/Distributed Memory Parallelization, and it will henceforth be referred to as SPMD.

Hybrid Shared/Distributed Memory Parallelization in OptiStruct collectively refers to the application of shared and/or distributed memory across multiple processors/nodes using MPI. OptiStruct SPMD can be run on either a single node or multiple nodes in a cluster, depending upon the program and hardware limitations/requirements. SPMD in OptiStruct is implemented by the following MPI-based functionalities:

  • Domain Decomposition Method (DDM)
    • Level 1 – Task-based Parallelization
    • Level 2 – Parallelization of Geometric Partitions
  • Multi-Model Optimization (MMO) 
  • Failsafe Topology Optimization (FSO)

GPU

A Graphics Processing Unit (GPU) can be used to improve the performance of computationally intensive engineering applications. GPU computing is a process in which the GPU executes the computationally intensive sections of the application while the rest of the code runs on the CPU.

 

Domain Decomposition Method (DDM) 

Domain Decomposition Method (DDM) is a parallelization option in OptiStruct that can help significantly reduce model runtime with improved scalability, especially on machines with a high number of cores (for example, greater than 8).

DDM allows two main levels of parallelization depending on the attributes of the model; a launch sketch is shown below. More information on this topic is provided in the Optional Learning Material section of this course.
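A minimal launch sketch for a DDM run (flag spellings follow common OptiStruct usage – verify -ddm/-np/-nt against the Run Options documentation for your version):

# DDM: 4 MPI processes, each using 2 SMP threads
optistruct my_model.fem -ddm -np 4 -nt 2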

 

3. Run Options & Output Management

Below we will discuss run options and output management. Use the pictures below to learn about certain run options in OptiStruct, as well as the options available when you would like to run a single iteration.

 

 

 

Default Output Files

  • Protocol Files
    • .out file → provides a commentary on the solution process
    • .stat file → provides details on CPU and elapsed time for each solver module
  • Result Files
    • .h3d file → compressed binary file, containing both model and result data
    • .res file → HyperMesh binary results file
    • .mvw file → HyperView/HyperGraph session file to open results in HyperWorks desktop
    • .pch & .op2 files → Nastran Punch format and Output2 format
  • HTML Files
    • .html file → contains a problem summary and results summary of the run
    • frames.html → opens the .h3d files using the HyperView Player browser plug-in
    • menu.html → facilitates the selection of the appropriate .h3d file for the HyperView Player browser plug-in

Control Cards

Control card entities create solver control cards, such as results file I/O options, CPU and memory limits, and others. Control cards can be created using the Solver Browser: Solver Browser > Create > Cards > More.

A few recommended control cards are shown below:

  • OUTPUT controls the format of the default results output.
  • Use result keywords like DISPLACEMENT/STRESS for detailed control:
    • Global or subcase-dependent requests
    • Additional output formats like the .pch file
    • Output only for a subset of nodes/elements/properties
    • More detailed output-specific control, like stress type or stress location
  • Usage of other output control commands like FORMAT or PARAM,POST is possible, but not necessary/recommended, as OUTPUT is more flexible.

An example is shown below.

Outputs:

  • Disable the .res file (use .h3d instead) and the .html files
  • Enable the .op2 file, including model data
  • Reduce stress output to von Mises
  • Write displacements of node sets to ASCII files
    • .pch file for load case brake (node set 1)
    • .disp (OPTI) file for load case pothole (node set 2)
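A deck sketch matching the outputs listed above (OUTPUT keywords, set IDs, and subcase labels are illustrative – verify them against the Reference Guide; the model-data option for the .op2 file varies by version):

$ I/O Options section
OUTPUT,HTML,NO                 $ disable the .html files
OUTPUT,H3D                     $ write .h3d results (instead of the .res file)
OUTPUT,OP2                     $ enable .op2 output
STRESS(H3D,VON) = ALL          $ reduce stress output to von Mises
$ Subcase Information
SUBCASE 1
  LABEL brake
  DISPLACEMENT(PUNCH) = 1      $ node set 1 to the .pch file
SUBCASE 2
  LABEL pothole
  DISPLACEMENT(OPTI) = 2       $ node set 2 to the .disp file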

In the HyperMesh client we can set these through the Entity Editor, Load Step Browser, and Cards Browser.

Optional Learning Material

Want to learn more? Use the content below to get additional information on the advanced topics. Get the most out of your knowledge and challenge yourself by reviewing the provided supplementary material and exercise.

  • Advanced Topics Guide (Advanced Topics.pdf)
  • Advanced Topics Exercise Model & PDF (ex_opt2.pdf, Opt2.zip)

 

Congratulations! You have finished the course. Thank you for attending! 
