 
      (Updated on 2013/2/13)
        
        ID:O-1
        Principal developer
        Shu TAKAGI, Team Leader, RIKEN
        
        General description
        
ZZ-EFSI is a fluid-structure interaction code aimed at analyzing and predicting the behavior of the soft human body for medical applications.
        
        Computational model
        
        - Finite difference modeling. The solid structure and the fluid are identified by a density function (or VOF); a minimal property-blending sketch is given below.
        - Both the solid structure and the fluid are treated as incompressible and are described by different stress terms.
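
        As a rough illustration of this one-fluid treatment (not taken from the ZZ-EFSI source), material properties in each cell can be blended with the density/VOF function; the structure and property names below are hypothetical.

            // Minimal sketch: blend a material property (here viscosity) with a
            // VOF/density function phi, where phi = 1 in the solid and 0 in the fluid.
            // Illustrative only; names and values are not from ZZ-EFSI.
            #include <vector>

            struct Cell { double phi, mu; };

            void blend_viscosity(std::vector<Cell>& cells, double mu_fluid, double mu_solid) {
                for (auto& c : cells)
                    c.mu = c.phi * mu_solid + (1.0 - c.phi) * mu_fluid;
            }
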
        Computational method
        SMAC method, 4-color SOR
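
        A generic sketch of a 4-color SOR sweep for a 2-D Poisson problem on a unit-spaced grid is given below (illustrative only, not the ZZ-EFSI pressure solver). Cells are colored by (i mod 2, j mod 2), so cells of one color never neighbor each other under the 5-point stencil and can be updated independently within a sweep.

            // 4-color SOR for  d2p/dx2 + d2p/dy2 = f  on a unit-spaced interior grid.
            // Generic illustration; not the ZZ-EFSI implementation.
            #include <vector>
            #include <cstddef>

            void sor4color(std::vector<double>& p, const std::vector<double>& f,
                           std::size_t nx, std::size_t ny, double omega, int sweeps) {
                auto id = [nx](std::size_t i, std::size_t j) { return j * nx + i; };
                for (int s = 0; s < sweeps; ++s)
                    for (int color = 0; color < 4; ++color)          // colors from (i%2, j%2)
                        for (std::size_t j = 1; j + 1 < ny; ++j)
                            for (std::size_t i = 1; i + 1 < nx; ++i) {
                                if (static_cast<int>((i % 2) * 2 + (j % 2)) != color) continue;
                                // Gauss-Seidel value with over-relaxation; same-color cells
                                // never neighbor each other, so this inner loop could be
                                // parallelized (e.g. with OpenMP) without data races.
                                double gs = 0.25 * (p[id(i - 1, j)] + p[id(i + 1, j)]
                                                  + p[id(i, j - 1)] + p[id(i, j + 1)]
                                                  - f[id(i, j)]);
                                p[id(i, j)] += omega * (gs - p[id(i, j)]);
                            }
            }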
        
        Parallelization
        Domain decomposition method
        
        Required language and library
        FORTRAN90, C++, MPI, OpenMP, SPHERE
        
        Status of code for public release
Source code is available through the ISLIM download site.
        
        Maximum computing size in present experiences
        
        - Number of voxels: 960×960×960 (884,736,000 elements)
        - Parallel computing with 8,192 cores
        - Required memory/disk storage: 400 GB / 8 TB

        Expected computing size on the K computer
        
        - Number of voxels: 40000×4000×4000 (640 billion elements)
        - Whole-body, high-resolution blood stream analysis, with input data generated from MRI images in the demonstration experiments
        - Required memory/disk storage: 300 TB / 6 PB.
 
Figure 1. Whole-body blood stream simulation including material properties such as those of the blood vessels.
   	  What does the code enable?
         
        - Blood stream computation covering everything from large blood vessels to capillaries, including red blood cells and platelets
        - The simulation is driven directly by medical data such as CT and MRI scans, which enables personalized simulation promptly after the doctor's diagnosis.
 
        ID:O-2
        Principal developer
        Kenichi ISHIKAWA, Associate Professor, University of Tokyo
        
        General description
		
        
The code analyzes the spatial dose distribution in a whole body from a medical heavy-particle beam using a Monte Carlo method on voxel data. The newly developed domain-decomposition Monte Carlo technique allows very large voxel data sets to be treated.
        
        Computational model
Domain decomposition of the voxel data
        
    
        Computational method
        Monte Carlo method
        
        Parallelization
History parallelization combined with domain decomposition
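
        The following is a minimal sketch of plain history parallelization with MPI and toy physics (each history deposits its energy at an exponentially sampled depth in a 1-D voxel column, and the per-rank tallies are summed at the end). The domain decomposition of the voxel phantom used by the actual code is not shown, and all names and parameter values are illustrative.

            // History-parallel Monte Carlo dose tally (toy model, not the actual code).
            #include <mpi.h>
            #include <random>
            #include <vector>
            #include <cstdio>

            int main(int argc, char** argv) {
                MPI_Init(&argc, &argv);
                int rank = 0, size = 1;
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                MPI_Comm_size(MPI_COMM_WORLD, &size);

                const int nvox = 200;                 // voxels along the beam axis (toy)
                const long histories = 1000000;       // total number of particle histories
                const double voxel_cm = 0.1, mfp_cm = 5.0, e_per_history = 1.0;

                std::mt19937_64 rng(12345 + rank);    // independent random stream per rank
                std::exponential_distribution<double> depth(1.0 / mfp_cm);

                std::vector<double> local(nvox, 0.0), global(nvox, 0.0);
                for (long h = rank; h < histories; h += size) {     // history parallelism
                    int v = static_cast<int>(depth(rng) / voxel_cm);
                    if (v < nvox) local[v] += e_per_history;         // deposit energy
                }
                // Sum the per-rank tallies into the final dose map on rank 0.
                MPI_Reduce(local.data(), global.data(), nvox, MPI_DOUBLE, MPI_SUM, 0,
                           MPI_COMM_WORLD);
                if (rank == 0) std::printf("entrance-voxel deposit: %g\n", global[0]);
                MPI_Finalize();
                return 0;
            }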
        
        Required language and library
        FORTRAN77, Fortran90, MPI
        
        Status of code for public release
Source code is available through the ISLIM download site.
        
        Maximum computing size in present experiences
        
        - The dose distribution in a whole human body voxel phantom
        - The number of voxels: 502×234×860 (whole body), using 1,024 cores of QUEST/RICC

        Expected computing size on the K computer
        
        - A whole human body voxel phantom divided into 0.3 mm cubic voxels
        - The number of voxels: 1640×890×5630
        - The number of cores: 640 thousand.
 
Figure 1. The dose distributions in a human body voxel phantom for an assumed 149 MeV/u carbon beam injected into the lung. The three figures show the contributions from all particles (left), from ions (center), and from neutrons (right).
      What does the code enable?
         
        - Precise spatial dose distributions, even in highly non-uniform regions of the body
        - The particle-wise contributions from ion species, neutrons, and photons, which are useful for evaluating biological effects and analyzing the risk of secondary cancers.
 
	
        ID:O-3
        Principal developer
        Kohei OKITA, Nihon University
        
        General description
		
        
Ultrasound propagation simulation for cancer treatment using High-Intensity Focused Ultrasound (HIFU)
        
        Computational model
The basic equations of ultrasound propagation in multicomponent media are solved using n-th-order spatial and second-order time difference schemes (Okita et al., Int. J. Numer. Meth. Fluids 2011; 65:43-66).
    
        Computational method
        Finite Difference Time Domain (FDTD) method
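
        As a minimal sketch of this scheme type (second order in time, high order in space), the following updates the linear wave equation u_tt = c^2 u_xx on a 1-D homogeneous grid with a fourth-order spatial stencil. It is illustrative only and is not the multicomponent formulation of Okita et al. (2011); all names are hypothetical.

            // One leapfrog time step: second-order in time, fourth-order in space.
            #include <vector>
            #include <cstddef>

            void step_wave_1d(std::vector<double>& u_new, const std::vector<double>& u,
                              const std::vector<double>& u_old,
                              double c, double dt, double dx) {
                const double r2 = (c * dt / dx) * (c * dt / dx);
                for (std::size_t i = 2; i + 2 < u.size(); ++i) {
                    // Fourth-order central approximation of u_xx (times dx^2).
                    double uxx = (-u[i - 2] + 16.0 * u[i - 1] - 30.0 * u[i]
                                  + 16.0 * u[i + 1] - u[i + 2]) / 12.0;
                    // Standard second-order update in time.
                    u_new[i] = 2.0 * u[i] - u_old[i] + r2 * uxx;
                }
            }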
        
        Parallelization
Hybrid parallelization with domain decomposition
        
        Required language and library
        Fortran90, C++, MPI, OpenMP, SPHERE
        
        Status of code for public release
Source code is available through the ISLIM download site.
        
        Maximum computing size in present experiences
        
        - The number of meshes: 1400×1200×1200 (2,016,000,000 nodes)
        - The number of cores: 256 to 8,192
        - Required memory/disk storage: 484 GB / 1.5 TB

        Expected computing size on the K computer
        
        - The number of meshes: 1.28 to 4.32 trillion nodes
        - Required memory/disk storage: 31-103 TB / 90-300 TB (640 thousand cores)
 
Figure 1. HIFU treatment of a brain cancer through the skull using an array transducer.
      	What does the code enable?
         
        - Prediction of the treated region by simulation with a human body model derived from medical images
        - Simulation-assisted focus control of HIFU for deep cancers on which the ultrasound cannot otherwise be focused correctly
        - Support for the development of HIFU devices and for their safety evaluation in clinical trials for regulatory approval
        - Pre-operative planning of minimally invasive HIFU therapy using simulation of an individual patient's body.
 
	
        ID:O-4
        Principal developer
        Toshiaki HISADA, Professor, University of Tokyo
        General description
		
        
Heart muscle cell models including internal structure (the microscopic model) are assigned to each finite element of the heart model (the macroscopic model). Both models are coupled and solved simultaneously using the homogenization method.
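
        As a toy one-dimensional analogue of this micro-macro coupling (not the UT-Heart cell model or its formulation), each macroscopic element can carry its own set of microscopic stiffnesses, and its effective modulus is their harmonic mean, which is the exact homogenization result for equal-length segments in series. All names below are illustrative.

            // Effective modulus of one macroscopic element from its micro stiffnesses
            // (harmonic mean = exact 1-D result for equal-length segments in series).
            #include <vector>

            double effective_modulus(const std::vector<double>& micro_E) {
                double inv_sum = 0.0;
                for (double E : micro_E) inv_sum += 1.0 / E;            // compliances add
                return static_cast<double>(micro_E.size()) / inv_sum;   // harmonic mean
            }

            // One effective modulus per macroscopic element, each fed by its own
            // microscopic model (here just a list of micro stiffnesses).
            std::vector<double> homogenize(const std::vector<std::vector<double>>& micro_per_elem) {
                std::vector<double> E_eff;
                for (const auto& micro : micro_per_elem)
                    E_eff.push_back(effective_modulus(micro));
                return E_eff;
            }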
        
        Computational model
The finite element method
        
    
        Computational method
Sparse matrix solvers using both iterative and direct methods
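
        A generic sketch of the iterative side, assuming a symmetric positive-definite system stored in CSR format (not the solver actually used in this code), is the conjugate gradient method:

            // Conjugate gradient for an SPD sparse matrix in CSR format (illustrative only).
            #include <vector>
            #include <cmath>
            #include <cstddef>

            struct Csr { std::vector<int> row_ptr, col; std::vector<double> val; };

            static std::vector<double> spmv(const Csr& A, const std::vector<double>& x) {
                std::vector<double> y(x.size(), 0.0);
                for (std::size_t i = 0; i < x.size(); ++i)
                    for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
                        y[i] += A.val[k] * x[A.col[k]];
                return y;
            }

            static double dot(const std::vector<double>& a, const std::vector<double>& b) {
                double s = 0.0;
                for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
                return s;
            }

            void cg(const Csr& A, const std::vector<double>& b, std::vector<double>& x,
                    int max_iter = 1000, double tol = 1e-8) {
                std::vector<double> r = b, p, Ap = spmv(A, x);
                for (std::size_t i = 0; i < r.size(); ++i) r[i] -= Ap[i];   // r = b - A x
                p = r;
                double rs = dot(r, r);
                for (int it = 0; it < max_iter && std::sqrt(rs) > tol; ++it) {
                    Ap = spmv(A, p);
                    double alpha = rs / dot(p, Ap);
                    for (std::size_t i = 0; i < x.size(); ++i) {
                        x[i] += alpha * p[i];
                        r[i] -= alpha * Ap[i];
                    }
                    double rs_new = dot(r, r);
                    for (std::size_t i = 0; i < p.size(); ++i)
                        p[i] = r[i] + (rs_new / rs) * p[i];
                    rs = rs_new;
                }
            }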
        
        Parallelization
Hybrid parallelization / flat MPI parallelization
        
        Required language and library
FORTRAN90, MPI, OpenMP
        
        Status of code for public release
Not released to the public.
        
        Maximum computing size in present experiences
        
        - About 26,000 DOFs/cell × 8,000 cells
        - A biventricular macroscopic model with more than 8,000 elements
        - An 8,000-core parallelization on an x86 cluster
        - Required memory/disk storage: 4 GB × 1,000 / 10-100 GB

        Expected computing size on the K computer
        
        - About 200,000 DOFs/cell × 640,000 cells
        - A whole heart model with more than 640,000 elements
        - Parallelization for 640,000 cores
        - Required memory/disk storage: 16 GB × 80,000 / 3 GB × 80,000.
 
Figure 1. UT-Heart simulates heart beats and blood ejection starting from microscopic events.
      What does the code enable?
         
        - Multiscale simulation of a finely meshed whole-heart model for practical use will be enabled with almost ideal scalability
        - Validation based on clinical data is ongoing from various angles at the University of Tokyo Hospital towards clinical application
        - The macroscopic heart simulator has already been used for the design of an Implantable Cardioverter Defibrillator and brought a breakthrough
        - The relationship between microscopic abnormalities of functional proteins or ion channels in a cell and macroscopic heart diseases such as hypertrophic cardiomyopathy or long QT syndrome will be understood.