Hi,
I would like to run two nested DOEs: a custom DOE and, for each of its evaluations, another DOE:
- loop on x1 with a CustomDOE, where the x1 values are given in a dataset;
- loop on x2 with a Latin square DOE, specified with a parameter space.
In the end, I would get a dataset containing all the (x1, x2) values.
Could you help me elaborate the script corresponding to this scenario?
Thanks!
Hi,
Here are the different steps you may need:
- Build your sub-scenario (DOE on the x2 variable).
- Turn that scenario into a discipline with the ScenarioAdapter.
- Set its default algorithm with set_algorithm() -> your Latin square.
- Create a DOE scenario with a single discipline (x1 variable) -> your scenario adapter.
- Execute this upper scenario with a CustomDOE; to do so, take only the values from your dataset.
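To illustrate the structure of these steps (this is not the GEMSEO API, just a stdlib-only sketch), the nesting amounts to an outer loop over x1 values taken from a dataset and, for each x1, an inner space-filling DOE on x2. The names `nested_doe`, `latin_hypercube_1d` and `evaluate` are hypothetical, and the inner sampler is a simple Latin-hypercube-style stand-in for the Latin square algorithm:

```python
import random

def latin_hypercube_1d(n, low, high, rng):
    """Latin-hypercube-style 1D sample: one random point per stratum."""
    width = (high - low) / n
    strata = list(range(n))
    rng.shuffle(strata)
    return [low + (s + rng.random()) * width for s in strata]

def nested_doe(x1_values, n_x2, x2_bounds, evaluate, seed=0):
    """Outer custom DOE on x1 (values from a dataset), inner DOE on x2."""
    rng = random.Random(seed)
    records = []
    for x1 in x1_values:  # outer loop: the custom DOE on x1
        for x2 in latin_hypercube_1d(n_x2, *x2_bounds, rng):  # inner DOE on x2
            records.append({"x1": x1, "x2": x2, "y": evaluate(x1, x2)})
    return records

# Hypothetical usage with a toy discipline y = x1 + x2.
data = nested_doe([0.0, 1.0, 2.0], 4, (-1.0, 1.0), lambda a, b: a + b)
```

In GEMSEO, the inner loop is what the ScenarioAdapter discipline encapsulates, so the upper scenario only sees x1.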
Do you need explicit explanations on a particular topic?
Hi,
I would generate the two DOEs A and B (NumPy arrays) and define a third DOE C (a NumPy array) as the Cartesian product of A and B. Then, create a scenario and execute it using a CustomDOE parametrized by C.
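As a minimal stdlib-only sketch of this idea (GEMSEO itself works with NumPy arrays, so an extra conversion step would be needed), the Cartesian product C of two sample sets A and B can be built with `itertools.product`; the sample values here are hypothetical:

```python
from itertools import product

# A: custom DOE samples for x1; B: DOE samples for x2 (hypothetical values).
A = [0.0, 0.5, 1.0]
B = [(0.1,), (0.9,)]  # each B entry may itself be multi-dimensional

# C: one row per (x1, x2) combination, flattened into a single tuple,
# ready to be passed to a CustomDOE as the full sample matrix.
C = [(a, *b) for a, b in product(A, B)]
```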
MultiLevelDOE would be a nice feature for GEMSEO, with levels: list[tuple[list[str], BaseDoeSettings]] as unique parameter.
@SebastienBocquet as a complement to Matthias' proposal, which I think is the right approach: you can generate the samples of a GEMSEO DOE as a NumPy array, without any link to a scenario, using the generate_samples method (see the gemseo.algos.doe.base_doe module in the GEMSEO 6.3.0 documentation), then manipulate these samples and reinject them into a CustomDOE.
Best,
François
Thanks to all three of you; I will try the latter approach.
We briefly discussed this post with @LoicCousin and ended up with the remark that GEMSEO could decouple the sampling from the evaluation. GEMSEO could provide services to create sampling points, possibly in a nested or multilevel manner (LHSSampler, etc.).
Then, the sampling points would be given to an evaluator, which would essentially be a loop.
One advantage I see in this decoupling is that it may ease CPU-time optimization, since the developer handles the loop over the sampling points in which the discipline is evaluated. Above this loop, they can put a Numba pragma or any other CPU-time optimization strategy such as vectorization. At the moment, this loop seems quite buried within the DOE, and thus not really accessible for CPU-time optimizations.
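A stdlib-only sketch of the decoupling described above, under the assumption of a hypothetical sampler/evaluator split (these function names are not part of GEMSEO): the sampler only produces points, and the evaluation loop belongs to the user, who is then free to vectorize or JIT-compile it:

```python
import random

def sample_points(n, bounds, seed=0):
    """Sampling service: returns points only, no evaluation involved."""
    rng = random.Random(seed)
    return [tuple(rng.uniform(lo, hi) for lo, hi in bounds) for _ in range(n)]

def evaluate_all(points, discipline):
    """User-owned evaluation loop: the natural place to apply Numba,
    multiprocessing or vectorization, since the loop is now explicit."""
    return [discipline(*p) for p in points]

# Hypothetical usage with a toy discipline y = x1 * x2.
points = sample_points(5, [(0.0, 1.0), (-1.0, 1.0)])
results = evaluate_all(points, lambda x1, x2: x1 * x2)
```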
The generation and evaluation of the DOE are already split, as we already pointed out (see generate_samples and CustomDOE).
We could integrate a utility function, which could also serve as an example, showing how to manipulate multiple DOEs and mix them. Users may build anything on top of this.