This reverts commit 7a1a5d206d.
We could already cause this behavior by simply setting ignore-verified
to false, as that flag is basically an on/off switch for even
considering outdated-after or not.
So avoid the extra logic and just make the GUI use the previously
existing way.
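
For illustration, a minimal sketch of that relationship; the names and
shapes below are assumptions, not the actual PBS implementation:

    // hypothetical sketch: ignore-verified gates all skip logic, so
    // outdated-after is only consulted when it is enabled
    interface VerifyOptions {
        ignoreVerified: boolean;
        outdatedAfterDays?: number;
    }

    function shouldSkipSnapshot(lastOkAgeDays: number | null, opts: VerifyOptions): boolean {
        if (!opts.ignoreVerified) {
            return false; // verify everything, outdated-after is never considered
        }
        if (lastOkAgeDays === null) {
            return false; // no successful verification yet, so verify now
        }
        if (opts.outdatedAfterDays === undefined) {
            return true; // verified OK and no expiry configured, skip
        }
        return lastOkAgeDays < opts.outdatedAfterDays; // skip only recent OKs
    }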
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
This matches what we do for (most of) such things in PVE since 7.0,
and also what the disk management GUI shows. Further, disks are sold
with SI units for their advertised capacity, so it's more fitting
there too.
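
As a quick illustration of the difference, a minimal base-10
formatter sketch (not the actual widget-toolkit helper):

    // SI (base-10) size formatting: 1 TB = 10^12 bytes, while 1 TiB = 2^40
    function formatSizeSI(bytes: number): string {
        const units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'];
        let value = bytes;
        let i = 0;
        while (value >= 1000 && i < units.length - 1) {
            value /= 1000;
            i++;
        }
        return `${value.toFixed(2)} ${units[i]}`;
    }

    // a disk sold as "2 TB" holds 2 * 10^12 bytes:
    console.log(formatSizeSI(2e12)); // "2.00 TB" (only ~1.82 TiB in base-2)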
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
by checking if *any* record has data, not only the first.
Checking only the first record would prevent the chart from being
shown for, e.g., newly added datastores, or for datastores after the
server was offline for some time.
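
A minimal sketch of the changed check (the record shape is
illustrative):

    interface RrdRecord { time: number; value?: number }

    // before: only the first record decided whether the chart renders
    const hasDataOld = (data: RrdRecord[]) => data[0]?.value !== undefined;

    // after: any record with data is enough to show the chart
    const hasDataNew = (data: RrdRecord[]) => data.some(rec => rec.value !== undefined);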
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
only ask for the name of the current NS, not the full NS path, to
avoid overly long input requirements on deep levels.
This needs a few smaller hacks; ideally we would pull out the basic
stuff from the Edit window into some EditBase window and let both
SafeDestroy and Edit window derive from that, to keep common features
in sync.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
they all still used some odd side effects of the tree structure to
decide what record type they operated on; just move them over to the
new `ty` record field.
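
A sketch of what dispatching on such an explicit type can look like;
the type values and action lists here are assumptions for
illustration:

    type RecordTy = 'ns' | 'group' | 'snapshot';

    // decide available actions from the explicit record type instead of
    // inferring it from the node's depth in the tree
    function actionsFor(ty: RecordTy): string[] {
        switch (ty) {
            case 'ns':       return ['add-namespace', 'remove-namespace'];
            case 'group':    return ['prune', 'change-owner', 'forget'];
            case 'snapshot': return ['verify', 'protect', 'forget'];
        }
    }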
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
this not only makes the action disable/hide checks simpler, but also
prepares the view a bit for the idea of adding a new API endpoint
that returns the whole datastore content tree as structured JSON so
that it can be directly loaded into a tree store.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
that way it's easier to see on which NS one currently operates, and
it allows one to better distinguish the root NS from some sub-NS
named "Root"
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
reference the NS so that users get a hint where they currently are
hierarchy-wise, and clarify that we found no *accessible* snapshots
on this level, i.e., there can be some that we just cannot see due to
only having access to lower-level NS or them having different owners.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
without this, the store stayed active in the background and kept
updating every 3s for every datastore whose UI was opened.
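
A sketch of the underlying lifecycle issue; the wrapper below is
hypothetical and only models the polling behavior:

    // periodic poller tied to a view: stopUpdate() must run when the view
    // goes away, otherwise the store keeps polling in the background
    class UpdateStore {
        private timer?: ReturnType<typeof setInterval>;

        constructor(private load: () => void, private intervalMs = 3000) {}

        startUpdate(): void {
            this.timer ??= setInterval(this.load, this.intervalMs);
        }

        stopUpdate(): void {
            clearInterval(this.timer);
            this.timer = undefined;
        }
    }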
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
verifying a single snapshot is now never skipped because of a recent
verification; verifying a group will now re-verify snapshots after 29
days, to be consistent with the 'All OK (old)' display
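
A sketch of the intended rules; the constant name and function shape
are assumptions:

    const REVERIFY_AFTER_DAYS = 29; // matches the 'All OK (old)' cutoff

    function skipVerify(target: 'snapshot' | 'group', lastOkAgeDays: number | null): boolean {
        if (target === 'snapshot') {
            return false; // a directly requested snapshot is always verified
        }
        // within a group, skip only snapshots verified OK recently enough
        return lastOkAgeDays !== null && lastOkAgeDays < REVERIFY_AFTER_DAYS;
    }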
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
When a user directly opened the web UI with a fragment that is not
the summary, opening the 'my settings' window failed because the
initial set of the columns field triggers a state change, which in
turn tries to trigger 'updateColumns'. That fails though, since the
columns were not even rendered yet (because we are on a different
tab).
To fix this, simply return when the panel is not rendered yet.
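
A minimal sketch of that guard (the panel shape is illustrative):

    function updateColumns(panel: { rendered: boolean; applyColumns: () => void }): void {
        if (!panel.rendered) {
            return; // state restore fired before the grid exists, nothing to do yet
        }
        panel.applyColumns();
    }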
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
avoid rendering the same icon twice, once clickable and once as
status. Also indicate the protection with a literal text and by
highlighting the single shield in green, if protected.
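
A sketch of what such a column renderer could look like; the CSS
classes are assumptions in the style of the PBS web UI:

    // single shield, highlighted green, plus literal text when protected
    function renderProtected(isProtected: boolean): string {
        return isProtected
            ? '<i class="fa fa-shield good"></i> Yes'
            : 'No';
    }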
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
when we triggered the first load before the panel was fully created,
there was no load mask for it (the snapshots would just "pop in" on
load).
Move the first reload into the 'activate' listener. This will be
called every time a user opens the content tab of a datastore, so
guard it with a 'firstLoad' bool.
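
A sketch of that guard (names are illustrative):

    let firstLoad = true;

    // 'activate' fires on every visit to the tab, but the initial load
    // should only happen once, after the panel is rendered
    function onActivate(reload: () => void): void {
        if (firstLoad) {
            firstLoad = false;
            reload(); // the panel exists now, so the load mask is shown
        }
    }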
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
two things were wrong with the old code:
* the sort function wants -1, 0 and 1 as return values for a<b, a==b
  and a>b respectively, not a bool (which `a < b` returns)
* we have to sort the newest backups first, since the first reason is
  'keep-last'. Until now, we sorted the oldest backups first,
  resulting in the older backups getting the 'keep-last' reason (see
  the sketch below)
reported by a user in the forum:
https://forum.proxmox.com/threads/prune-ui-and-prune-schedule-simulator-dont-match.94944/
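
A sketch of the fix (the record shape is illustrative):

    interface Snapshot { backupTime: number } // epoch seconds

    // the old comparator returned a bool, which violates the sort
    // contract (-1/0/1) and left the oldest backups first; the fix is a
    // numeric comparator that puts the newest backups first, so
    // 'keep-last' is assigned to the most recent ones
    function sortNewestFirst(snapshots: Snapshot[]): Snapshot[] {
        return snapshots.sort((a, b) => b.backupTime - a.backupTime);
    }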
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
we are not actually pruning the whole datastore, but only a single
group, so set that as the title
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>
Since the API call always starts a real worker, we cannot have a
preview. It would also be very hard to show that for all groups in a
non-confusing way. We reuse the pbsPruneInputPanel and add the
dry-run field there conditionally.
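
A sketch of the conditional field; the xtype names besides
pbsPruneInputPanel are assumptions:

    // add the dry-run checkbox only where a preview-style run makes sense
    function extraPruneItems(allowDryRun: boolean): object[] {
        const items: object[] = [];
        if (allowDryRun) {
            items.push({
                xtype: 'proxmoxcheckbox',
                name: 'dry-run',
                fieldLabel: 'Dry Run',
            });
        }
        return items;
    }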
Signed-off-by: Dominik Csapak <d.csapak@proxmox.com>