a.__code__.co_code
b'd\x01S\x00'
This doesn’t work completely, because the decorator is also included in the source.
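To make the difference concrete, here is a minimal sketch (not the library's code) of the two hashing strategies: hashing the raw bytecode, which ignores comments but also misses changes to constants (they live in `co_consts`, not `co_code`), versus hashing the text returned by `inspect.getsource`, which includes comments and the decorator line.

```python
import hashlib, inspect

def bytecode_hash(f):
    # co_code holds only the opcodes: comments never change it,
    # but literal values live in co_consts and are missed too
    return hashlib.md5(f.__code__.co_code).hexdigest()

def source_hash(f):
    # getsource returns the full text, including comments and any decorator lines
    return hashlib.md5(inspect.getsource(f).encode()).hexdigest()
```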
cache_disk (base_file, rm_cache=False, verbose=False)
Decorator to cache function output to disk
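A hedged usage sketch for the timed calls below; the function body and the cache path are illustrative assumptions, assuming `cache_disk` is used as a decorator factory whose `base_file` points at where the cached result is written.

```python
import time

@cache_disk('cache/slow_add')   # assumed: base_file is the on-disk location of the cache
def slow_add(a, b):
    time.sleep(1)               # stand-in for an expensive computation
    return a + b

slow_add(1, 2)                  # first call: computed (~1 s) and written to disk
slow_add(1, 2)                  # second call: read back from the cache almost instantly
```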
This is the first call, so the result is not from the cache.
CPU times: user 2.98 ms, sys: 122 µs, total: 3.1 ms
Wall time: 1 s
3
Now it is much faster because of the cache.
CPU times: user 4 µs, sys: 0 ns, total: 4 µs
Wall time: 7.15 µs
3
Adding comments changes the hash, so the function is re-run and its result is cached again under the new hash.
CPU times: user 1.49 ms, sys: 192 µs, total: 1.68 ms
Wall time: 1 s
3
CPU times: user 8 µs, sys: 1 µs, total: 9 µs
Wall time: 12.9 µs
3
reset_seed (seed=27)
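No docstring is shown for `reset_seed`; a minimal sketch of what such a helper usually does (an assumption, not the documented implementation) is to seed the RNGs involved in a torch workflow:

```python
import random
import numpy as np
import torch

def reset_seed(seed=27):
    # hypothetical re-implementation: seed the Python, NumPy and PyTorch RNGs
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
```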
test_close (a, b, eps=1e-05)
test that a is within eps of b
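For example, assuming it raises on failure like the usual test helpers:

```python
test_close(0.1 + 0.2, 0.3)        # passes: the floating-point error is well under the default eps
test_close(1.0, 1.05, eps=0.1)    # passes with a looser tolerance
# test_close(1.0, 1.05)           # would fail, since |1.0 - 1.05| > 1e-05
```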
Make a standard scaler that can also inverse-transform standard deviations. See Standardizer for details of the implementation.
array([[ 0.07263978, 0.63279488, -0.9975139 , 0.50899177, 0.15537652,
1.45555506, 1.56629646, -1.60237369, 1.51674974, 1.29584745],
[ 1.58579521, 0.83086419, -0.68281902, 0.51578245, -0.62395756,
-1.19720248, -0.43000476, 1.1539719 , -0.74724819, -0.85525414],
[-1.05809926, -1.69049694, 0.0895118 , -1.72684476, -1.08418417,
0.32617669, -1.16657374, 0.2345773 , 0.26525847, 0.64349108],
[-0.60033573, 0.22683787, 1.59082112, 0.70207053, 1.55276521,
-0.58452927, 0.03028204, 0.21382449, -1.03476002, -1.08408439]])
array([0.40358703, 0.6758362 , 0.77934606, 0.70748673, 0.34417949,
0.62067044, 0.48500116, 0.54921643, 0.34604713, 0.3660338 ])
array([0.30471427, 0.21926148, 0.04405831, 0.31536161, 0.25229864,
0.24649441, 0.26061043, 0.21187396, 0.26093989, 0.22927816])
StandardScaler.inverse_transform_std (x_std)
| | Details |
|---|---|
| x_std | standard deviations |
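The reason standard deviations get their own inverse: the forward transform is z = (x - mean) / scale, and a standard deviation is unaffected by the mean shift, so undoing it only needs the scale. A minimal sketch, assuming the fitted per-feature scale is stored in a `scale_` attribute as in sklearn's scaler (an assumption about the implementation):

```python
def inverse_transform_std(self, x_std):
    # standard deviations are invariant to the mean shift,
    # so the inverse is just a multiplication by the fitted scale
    return x_std * self.scale_
```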
array2df (x:torch.Tensor, row_names:Optional[Collection[str]]=None, col_names:Optional[Collection[str]]=None, row_var:str='')
| | Type | Default | Details |
|---|---|---|---|
| x | Tensor | | 2d tensor |
| row_names | Optional | None | names for the rows |
| col_names | Optional | None | names for the columns |
| row_var | str | | name of the first column (the one with row names); this should describe the values of row_names |
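A hedged usage sketch; the exact DataFrame layout is an assumption based on the parameters above (row labels in a first column whose header is `row_var`):

```python
import torch

x = torch.rand(2, 3)                               # array2df expects a 2d tensor
df = array2df(
    x,
    row_names=['sample_0', 'sample_1'],            # one label per row
    col_names=['f1', 'f2', 'f3'],                  # one label per column
    row_var='sample',                              # header of the column holding the row names
)
df
```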
array([[[0.96646567, 0.58332229, 0.09242191],
[0.0136295 , 0.83693011, 0.9147879 ],
[0.70458626, 0.3870066 , 0.7056939 ]],
[[0.92331116, 0.28815289, 0.68401985],
[0.5202925 , 0.87736578, 0.92388931],
[0.48923016, 0.59621396, 0.26427542]]])
retrieve_names (*args)
Tries to retrieve the argument names in the call frame; if there are multiple matches, the name is ''.
maybe_retrieve_callers_name (args)
Tries to retrieve the argument names in the call frame; if there are multiple matches, the name is ''.
['x', 'y']
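A minimal sketch of how such name recovery can work, scanning the caller's frame for objects identical (`is`) to the arguments; illustrative only, not necessarily the library's implementation:

```python
import inspect

def retrieve_names(*args):
    # hypothetical re-implementation: look the arguments up in the caller's local variables
    caller_locals = inspect.currentframe().f_back.f_locals
    names = []
    for a in args:
        matches = [name for name, val in caller_locals.items() if val is a]
        # a unique match yields the variable name; ambiguity or no match yields ''
        names.append(matches[0] if len(matches) == 1 else '')
    return names

x, y = 1.5, 2.5
retrieve_names(x, y)   # -> ['x', 'y']
```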
show_as_row (*os, names:Iterable[str]=None, **kwargs)
Shows an iterable of tensors in a row
row_items (**kwargs)
pretty_repr (o)
b
array([[[0.96646567, 0.58332229, 0.09242191],
        [0.0136295 , 0.83693011, 0.9147879 ],
        [0.70458626, 0.3870066 , 0.7056939 ]],
       [[0.92331116, 0.28815289, 0.68401985],
        [0.5202925 , 0.87736578, 0.92388931],
        [0.48923016, 0.59621396, 0.26427542]]])
a
array([[[0.96646567, 0.58332229, 0.09242191],
        [0.0136295 , 0.83693011, 0.9147879 ],
        [0.70458626, 0.3870066 , 0.7056939 ]],
       [[0.92331116, 0.28815289, 0.68401985],
        [0.5202925 , 0.87736578, 0.92388931],
        [0.48923016, 0.59621396, 0.26427542]]])
a
array([[[0.96646567, 0.58332229, 0.09242191],
        [0.0136295 , 0.83693011, 0.9147879 ],
        [0.70458626, 0.3870066 , 0.7056939 ]],
       [[0.92331116, 0.28815289, 0.68401985],
        [0.5202925 , 0.87736578, 0.92388931],
        [0.48923016, 0.59621396, 0.26427542]]])
b
array([[[0.96646567, 0.58332229, 0.09242191],
        [0.0136295 , 0.83693011, 0.9147879 ],
        [0.70458626, 0.3870066 , 0.7056939 ]],
       [[0.92331116, 0.28815289, 0.68401985],
        [0.5202925 , 0.87736578, 0.92388931],
        [0.48923016, 0.59621396, 0.26427542]]])
b
array([[[0.96646567, 0.58332229, 0.09242191],
        [0.0136295 , 0.83693011, 0.9147879 ],
        [0.70458626, 0.3870066 , 0.7056939 ]],
       [[0.92331116, 0.28815289, 0.68401985],
        [0.5202925 , 0.87736578, 0.92388931],
        [0.48923016, 0.59621396, 0.26427542]]])
c
array([[[0.96646567, 0.58332229, 0.09242191],
        [0.0136295 , 0.83693011, 0.9147879 ],
        [0.70458626, 0.3870066 , 0.7056939 ]],
       [[0.92331116, 0.28815289, 0.68401985],
        [0.5202925 , 0.87736578, 0.92388931],
        [0.48923016, 0.59621396, 0.26427542]]])
display_as_row (dfs:dict[str,pandas.core.frame.DataFrame], title='', hide_idx=True, styler=<function _style_df>)
display multiple dataframes in the same row
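A minimal sketch of the underlying idea (the helper name below is hypothetical): render each DataFrame to HTML and place the fragments side by side with inline-block styling. The documented function additionally supports a title, index hiding and a custom styler.

```python
import pandas as pd
from IPython.display import display_html

def show_dfs_in_a_row(dfs: dict[str, pd.DataFrame]):
    # hypothetical helper: concatenate the HTML of each frame into one row
    html = ""
    for name, df in dfs.items():
        html += '<div style="display:inline-block; margin-right:2em; vertical-align:top">'
        html += f"<b>{name}</b>{df.to_html()}</div>"
    display_html(html, raw=True)
```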
row_dfs (dfs:dict[str,pandas.core.frame.DataFrame], title='', hide_idx=True, styler=<function _style_df>)
Inspired by: https://github.com/pytorch/pytorch/pull/9281
eye_like (x:torch.Tensor)
Return a tensor with the same batch size as x that has an n×n eye matrix in each sample of the batch.
Args: x: tensor of shape (B, n, m) or (n,m)
Returns: tensor of shape (B, n, m) or (n,m) that has the same dtype and device as x.
tensor([[[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]],
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]],
[[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]],
[[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]]]])
tensor([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
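One way to obtain this behavior, sketched here as an assumption rather than the actual implementation, is to build a single eye matrix for the last two dimensions and broadcast it over any leading batch dimensions:

```python
import torch

def eye_like(x: torch.Tensor) -> torch.Tensor:
    # hypothetical re-implementation: an (n, m) eye expanded (as a view) over the batch dims
    n, m = x.shape[-2], x.shape[-1]
    return torch.eye(n, m, dtype=x.dtype, device=x.device).expand_as(x)
```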
is_diagonal (x:torch.Tensor)
Check that a tensor is diagonal with respect to the last 2 dimensions.
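A sketch of one way to perform the check (an assumption, not the library's code): keep only the main diagonal of the last two dimensions and verify nothing else was non-zero.

```python
import torch

def is_diagonal(x: torch.Tensor) -> bool:
    # hypothetical re-implementation: zero the off-diagonal entries and compare
    n, m = x.shape[-2], x.shape[-1]
    eye = torch.eye(n, m, dtype=torch.bool, device=x.device)
    return bool(torch.all(x * eye == x))
```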
product_dict (**kwargs)
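No docstring is shown for `product_dict`; assuming it follows the common recipe of that name (the Cartesian product of keyword-argument lists, yielded as dicts), a sketch would be:

```python
from itertools import product

def product_dict(**kwargs):
    # hypothetical re-implementation: one dict per combination of the provided value lists
    keys = kwargs.keys()
    for combo in product(*kwargs.values()):
        yield dict(zip(keys, combo))

list(product_dict(lr=[1e-3, 1e-2], bs=[32, 64]))
# [{'lr': 0.001, 'bs': 32}, {'lr': 0.001, 'bs': 64}, {'lr': 0.01, 'bs': 32}, {'lr': 0.01, 'bs': 64}]
```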