Increasing the usage #10


Open · sridhar-mani opened this issue Apr 2, 2025 · 13 comments
Labels: enhancement (New feature or request)

Comments

@sridhar-mani (Collaborator)

Currently, even with the web worker, we hit memory issues once the mesh size reaches about 1000. I am not a CFD person, so I'm guessing a workaround is to run the simulation on WebGPU, offloading all the workload to it and only plotting the result.

@sridhar-mani (Collaborator, Author)

Also, instead of plotting, maybe we can make a time-dependent solver and try running it on a simple mesh to see how it goes.

@nikoscham (Member) commented Apr 2, 2025

> Currently, even with the web worker, we hit memory issues once the mesh size reaches about 1000. I am not a CFD person, so I'm guessing a workaround is to run the simulation on WebGPU, offloading all the workload to it and only plotting the result.

Thanks for the pull request! WebGPU can be a solution. Additionally, using a more memory-efficient linear solver would help. Currently, I’m using lusolve (LU Decomposition), which isn’t ideal for large problems (mesh size ~1000, as you noted). An iterative solver (e.g. GMRES) would be a better option. However, I believe both improvements can be combined (i.e. implementing an iterative solver optimized for GPU). I can create a roadmap for such an implementation.
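For illustration, here is a minimal dense Jacobi sweep in plain JavaScript. This is only a sketch of the kind of iterative solver discussed above; `jacobiSolve` and its arguments are made-up names, not FEAScript code.

```javascript
// Minimal Jacobi iteration for A x = b (dense arrays, illustrative only).
// Unlike LU decomposition, it only needs A plus two vectors, so memory
// stays modest, especially once A is stored sparsely.
function jacobiSolve(A, b, { maxIter = 1000, tol = 1e-8 } = {}) {
  const n = b.length;
  let x = new Array(n).fill(0);
  for (let iter = 0; iter < maxIter; iter++) {
    const xNew = new Array(n);
    for (let i = 0; i < n; i++) {
      let sigma = 0;
      for (let j = 0; j < n; j++) {
        if (j !== i) sigma += A[i][j] * x[j];
      }
      xNew[i] = (b[i] - sigma) / A[i][i]; // requires a non-zero diagonal
    }
    // Convergence check on the update size (max norm)
    let diff = 0;
    for (let i = 0; i < n; i++) diff = Math.max(diff, Math.abs(xNew[i] - x[i]));
    x = xNew;
    if (diff < tol) break;
  }
  return x;
}
```

Each sweep only reads the previous iterate and writes independent rows, which is why this kind of solver maps naturally onto workers or a GPU.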

@nikoscham added the enhancement label Apr 2, 2025
@sridhar-mani (Collaborator, Author)

Yes, exactly! I think OpenFOAM uses such memory-efficient solvers to achieve near-accurate results. If that's possible here, we could split the computation into chunks across separate workers and offload them asynchronously to WebGPU, maybe?

@nikoscham (Member)

> Yes, exactly! I think OpenFOAM uses such memory-efficient solvers to achieve near-accurate results. If that's possible here, we could split the computation into chunks across separate workers and offload them asynchronously to WebGPU, maybe?

Yes, we should do that. I believe that using iterative solvers on the GPU is the only way to have an efficient CFD solver in JS. Also, iterative solvers can produce results as accurate as direct solvers, but they require more iterations (however, they use less memory and are better suited to parallel execution).
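As a rough sketch of the chunking idea (the file name and message format here are hypothetical, not existing FEAScript code), each worker could handle the matrix-vector product for its own block of rows in every iteration:

```javascript
// main.js: split the rows of A across a pool of workers (hypothetical sketch)
const numWorkers = navigator.hardwareConcurrency || 4;
const workers = Array.from({ length: numWorkers }, () => new Worker("rowChunkWorker.js"));

// Computes y = A * x with each worker handling a contiguous block of rows.
function multiplyChunked(A, x) {
  const chunk = Math.ceil(A.length / workers.length);
  const jobs = workers.map((w, k) => new Promise((resolve) => {
    const rows = A.slice(k * chunk, (k + 1) * chunk);
    w.onmessage = (e) => resolve(e.data); // partial result for this row block
    w.postMessage({ rows, x });
  }));
  return Promise.all(jobs).then((parts) => parts.flat());
}
```

```javascript
// rowChunkWorker.js: computes y_i = sum_j A_ij * x_j for its block of rows
self.onmessage = (e) => {
  const { rows, x } = e.data;
  const y = rows.map((row) => row.reduce((sum, aij, j) => sum + aij * x[j], 0));
  self.postMessage(y);
};
```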

@sridhar-mani (Collaborator, Author)

If possible, compiling this into an npm library might be good.

@nikoscham (Member)

> If possible, compiling this into an npm library might be good.

I'll look into this; thanks for your suggestion.

@sridhar-mani (Collaborator, Author)

I can help you with that if needed.

@nikoscham (Member) commented Apr 2, 2025

> Currently, even with the web worker

I've temporarily disabled the web worker export due to persistent CORS issues when loading Comlink from a CDN. Once we resolve these cross-origin restrictions (either by bundling Comlink locally or finding a compatible CDN), we can re-enable this feature. See here: 8e478dd

@sridhar-mani (Collaborator, Author) commented Apr 2, 2025

That can be solved once we use ES exports, following the standard we would adopt if we turn it into an npm library. I have solved the issue locally, but I think it's better to use an ES dynamic import, as that will always maintain the functionality. Is that okay? If so, we can restructure the src into a library.
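A minimal sketch of that approach, assuming Comlink is loaded with an ES dynamic import from a CDN (the URL, worker file, and `solve` method below are illustrative placeholders, not existing FEAScript names):

```javascript
// Illustrative only: load Comlink as an ES module at runtime and talk to a worker.
// "feascriptWorker.js", "solve", and meshConfig are hypothetical placeholders.
const Comlink = await import("https://unpkg.com/comlink/dist/esm/comlink.mjs");

const worker = new Worker("./feascriptWorker.js", { type: "module" });
const solver = Comlink.wrap(worker);                      // proxy to whatever the worker exposes
const meshConfig = { maxX: 1, maxY: 1, nx: 20, ny: 20 };  // hypothetical example input
const result = await solver.solve(meshConfig);            // runs inside the worker
```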

@nikoscham (Member)

@sridhar-mani Yes, making FEAScript into an npm library would definitely help. This would address the issues with CORS. I'll need a bit of time to read up on npm packaging (I am not as experienced as you in this).
In the meantime, I've implemented a temporary fix for the CORS issues by using a local version of the Comlink library. See here: 85fa913
This should work as a temporary solution until we can properly structure the project as an npm package. I am also working on reformulating the examples in order to use the Workers functionality. I am grateful for your help in the project!
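For reference, a minimal sketch of what an ESM-style package manifest could look like (all names and values here are placeholders, not decisions already made for FEAScript):

```json
{
  "name": "feascript",
  "version": "0.0.1",
  "type": "module",
  "main": "./src/index.js",
  "exports": {
    ".": "./src/index.js"
  },
  "files": ["src"]
}
```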

@sridhar-mani (Collaborator, Author) commented Apr 3, 2025

No worries. I'm working on writing an import module for tetrahedral meshes from Gmsh. I'd like clarification on whether that is the best mesh format for FEA.

FYI: I saw the PR; using it from a local copy is good, and it solves the issue for the current stage.
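As an aside on the Gmsh import mentioned above, a rough parsing sketch assuming the legacy ASCII MSH 2.2 format, where element type 4 is a 4-node tetrahedron (`parseGmshTetra` is a made-up name, not existing FEAScript code):

```javascript
// Parse nodes and 4-node tetrahedra from a Gmsh ASCII .msh (v2.2) string.
// Only the $Nodes and $Elements sections are read; everything else is skipped.
function parseGmshTetra(text) {
  const lines = text.split(/\r?\n/);
  const nodes = new Map();   // node id -> [x, y, z]
  const tetra = [];          // each entry: [n1, n2, n3, n4] node ids
  let i = 0;
  while (i < lines.length) {
    const line = lines[i].trim();
    if (line === "$Nodes") {
      const count = parseInt(lines[++i], 10);
      for (let k = 0; k < count; k++) {
        const [id, x, y, z] = lines[++i].trim().split(/\s+/).map(Number);
        nodes.set(id, [x, y, z]);
      }
    } else if (line === "$Elements") {
      const count = parseInt(lines[++i], 10);
      for (let k = 0; k < count; k++) {
        const parts = lines[++i].trim().split(/\s+/).map(Number);
        const [, type, numTags] = parts;           // elm-number elm-type num-tags ...
        if (type === 4) tetra.push(parts.slice(3 + numTags)); // 4-node tetrahedron
      }
    }
    i++;
  }
  return { nodes, tetra };
}
```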

@nikoscham (Member) commented Apr 9, 2025

> BTW, I also made a gpu branch where full WebGPU offloading has been implemented.

Originally posted by @sridhar-mani in #5

I have implemented a simple iterative solver (Jacobi method): https://github.com/FEAScript/FEAScript-core/blob/main/src/methods/jacobiMethodScript.js
You can start from this one to test GPU acceleration of the matrix operations. Apart from WebGPU, another option is GPU.js.
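As a possible starting point for the GPU.js route, a minimal sketch that moves the matrix-vector product of each Jacobi sweep onto a GPU.js kernel (the system size and names are illustrative, and `gpu.js` is assumed to be installed):

```javascript
import { GPU } from "gpu.js";

const n = 1024;          // illustrative system size
const gpu = new GPU();

// Kernel computing y[i] = sum_j A[i][j] * x[j]; this is the dominant cost of
// each Jacobi sweep and is embarrassingly parallel across rows.
const matVec = gpu.createKernel(
  function (A, x) {
    let sum = 0;
    for (let j = 0; j < this.constants.n; j++) {
      sum += A[this.thread.x][j] * x[j];
    }
    return sum;
  },
  { constants: { n }, output: [n] }
);

// Example call with dense arrays (A: n x n, x: length n):
// const y = matVec(A, x);
```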

@sridhar-mani (Collaborator, Author)

GPU.js is also a lightweight GPU programming wrapper in JS. Taichi.js is superior to it.
