For the FENCE problem: when I use a multimap I get TLE only on the N, M <= 1000 subtask, while the original constraints pass fine. If I use set<pair<int,int>> instead, all subtasks are cleared. What is the difference?
Actually, when you call multiset.erase(some_value), all instances of that value are removed from the multiset.
For example, if your multiset contains {1,2,4,4,4,5} and you call multiset.erase(4), the new multiset would be {1,2,5}.
If you want to remove only one copy of an element, pass an iterator into the erase function.
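A quick illustration of the two erase overloads:

#include <bits/stdc++.h>
using namespace std;

int main() {
    multiset<int> ms = {1, 2, 4, 4, 4, 5};

    // erase(value) removes ALL copies of the value
    // ms.erase(4);                      // ms would become {1, 2, 5}

    // erase(iterator) removes exactly one copy
    auto it = ms.find(4);
    if (it != ms.end()) ms.erase(it);    // ms is now {1, 2, 4, 4, 5}

    for (int x : ms) cout << x << ' ';   // prints 1 2 4 4 5
    cout << '\n';
}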
@sidhant007 @sapient @ankurkayal @maverick16
For solving SUBPRNJL using PBDS, as given in hint 2, for each subarray that we form from the array, are we inserting all of its elements into the PBDS first?
If we do this, then since each insertion is O(log n), isn't building the PBDS O(n log n) per subarray?
The overall algorithm would then become O(n^3 log n), since generating all the subarrays takes O(n^2).
Won't that exceed the time limits?
Where is my understanding faulty? Any help is appreciated.
It (Chef of the Year) can easily be done using two unordered maps, one for countries and one for chefs. Count the frequencies, then sort by frequency, breaking ties lexicographically when two entries have the same frequency.
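A minimal sketch of that idea (the input format here is assumed, so check it against the actual problem statement): count votes for chefs and for their countries in unordered maps, then pick each winner with the frequency-then-lexicographic tie-break.

#include <bits/stdc++.h>
using namespace std;

int main() {
    // Assumed format: n chefs (name country), then m votes (one chef name each)
    int n, m;
    cin >> n >> m;

    unordered_map<string, string> countryOf;            // chef -> country
    unordered_map<string, int> chefVotes, countryVotes; // vote counters

    for (int i = 0; i < n; ++i) {
        string chef, country;
        cin >> chef >> country;
        countryOf[chef] = country;
    }
    for (int i = 0; i < m; ++i) {
        string vote;
        cin >> vote;
        ++chefVotes[vote];
        ++countryVotes[countryOf[vote]];
    }

    // Winner = highest count; lexicographically smallest name on ties
    auto winner = [](const unordered_map<string, int>& cnt) {
        pair<string, int> best{"", -1};
        for (auto& [name, c] : cnt)
            if (c > best.second || (c == best.second && name < best.first))
                best = {name, c};
        return best.first;
    };
    cout << winner(countryVotes) << '\n' << winner(chefVotes) << '\n';
}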
Tried SAVKONO using multisets!

#include <bits/stdc++.h>
using namespace std;
#define endl '\n'
#define int long long int
#define fast ios_base::sync_with_stdio(false); cin.tie(0); cout.tie(0)
void solve(vector<int> &arr, int n, int z)
{
multiset<int> s;
for(auto x : arr) s.insert(x);
int ans = 0;
while(z>0 and !s.empty())
{
int power = *s.rbegin();
z -= power;
s.erase(s.find(power));
if((power/2) > 0) s.insert(power>>1);
ans++;
}
if(z<=0) cout << ans << endl;
else cout << "Evacuate" << endl;
return;
}
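For completeness, a driver for the above solve() might look like the sketch below; the input format (T test cases, each with n and z followed by the n powers) is an assumption and should be checked against the SAVKONO statement. Note that with the int macro in effect, main has to be declared as signed main().

signed main()
{
    fast;
    int t;
    cin >> t;
    while (t--)
    {
        int n, z;
        cin >> n >> z;
        vector<int> arr(n);
        for (auto &x : arr) cin >> x;
        solve(arr, n, z);
    }
    return 0;
}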
The PBDS formation isn't exactly O(n log n) per subarray. Suppose you already have the PBDS for the subarray a[1..6]; then to obtain the PBDS for a[1..7] you only need to insert a[7]. So instead of rebuilding in O(n log n) for every subarray, a single O(log n) insertion handles the transition.
Now consider all subarrays starting at a particular index: there are n distinct starting positions and up to n subarrays for each starting position. In O(n log n) you can build the PBDS for all the subarrays sharing one starting position, hence the overall complexity O(n * n log n) = O(n^2 log n).
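A minimal sketch of that incremental construction (the actual per-subarray query for SUBPRNJL is left out; pairs are inserted so that equal values can coexist in the PBDS tree):

#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

// ordered set with order_of_key / find_by_order in O(log n)
typedef tree<pair<int,int>, null_type, less<pair<int,int>>,
             rb_tree_tag, tree_order_statistics_node_update> ordered_set;

int main() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (auto &x : a) cin >> x;

    for (int l = 0; l < n; ++l) {      // fix the starting index
        ordered_set s;
        for (int r = l; r < n; ++r) {  // extend the subarray one element at a time
            s.insert({a[r], r});       // one O(log n) insertion, no rebuild
            // s now holds exactly the elements of a[l..r];
            // query it here, e.g. *s.find_by_order(k) gives the (k+1)-th smallest
        }
    }
}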
#include <bits/stdc++.h>
using namespace std;
typedef long long int ll;
int main() {
// cout << "GfG!";
ll n, m;
cin>>n>>m;
ll a[n],b[m];
map <ll,ll> a1,b1;
for(ll i=0;i<n;i++)
{
cin>>a[i];