glebstepanov1992
ArrayList list = new ArrayList();
 glebstepanov1992 in Russia for Yandex
what would you improve in this code?
Developer Program Engineer
How to effectively implement an index for facet filtering?
 glebstepanov1992 in United States
What if you have new types of pages?
 glebstepanov1992 February 11, 2014

import java.util.Stack;

public class ConstructTreeFromPreorder {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    public static Node constructTree(Node[] preOrder) {
        Stack<Node> stack = new Stack<Node>();
        Node root = preOrder[0];
        stack.push(root);
        for (int i = 1; i < preOrder.length; i++) {
            Node tmp = null;
            // pop every node whose value is smaller than the current one
            while (!stack.empty() && stack.peek().value < preOrder[i].value) {
                tmp = stack.pop();
            }
            if (tmp != null) {
                // current node is the right child of the last popped node
                tmp.right = preOrder[i];
            } else if (!stack.empty()) {
                // current node is the left child of the stack top
                stack.peek().left = preOrder[i];
            }
            stack.push(preOrder[i]);
        }
        return root;
    }
}

glebstepanov1992
February 11, 2014 We have n bits. In that case you can fit values up to (2^n) - 1, but in the case of signed numbers you use one bit for the sign, so you have n - 1 bits for the magnitude. That is where the formula comes from (e.g. a 32-bit signed int tops out at 2^31 - 1).
 glebstepanov1992 February 10, 2014 It depends on whether we use signed or unsigned numbers.
 glebstepanov1992 February 10, 2014 2^(n - 1) - 1
 glebstepanov1992 February 10, 2014Example is not correct for 24 largest number, which product gives 24 is 83.
 glebstepanov1992 February 05, 2014 Let's assume that we have sorted all intervals and built the covered intervals. Addition is simple:
find all intervals whose parts lie between this interval's begin and end, and join them.
We can place all starts and ends of intervals on the X axis, marking a begin as +1 and an end as -1. Sort them and find the length of the intervals where the running sum of begins and ends is bigger than 0.
But it is not an incremental solution.
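A minimal sketch of that +1/-1 sweep, computing the total covered length; the {begin, end} pair representation and the event encoding are my own assumptions:

```java
import java.util.Arrays;

// Sketch of the sweep described above: mark each begin as +1 and each
// end as -1 on the X axis, then accumulate; length is added while the
// running sum is bigger than 0.
public class CoveredLength {
    // intervals[i] = {begin, end}
    public static int coveredLength(int[][] intervals) {
        int n = intervals.length;
        int[][] events = new int[2 * n][2];
        for (int i = 0; i < n; i++) {
            events[2 * i] = new int[]{intervals[i][0], +1};
            events[2 * i + 1] = new int[]{intervals[i][1], -1};
        }
        // sort by coordinate; at equal x, process +1 before -1
        Arrays.sort(events, (a, b) -> a[0] != b[0] ? a[0] - b[0] : b[1] - a[1]);
        int covered = 0, active = 0, prevX = 0;
        for (int[] e : events) {
            if (active > 0) covered += e[0] - prevX; // inside a covered stretch
            prevX = e[0];
            active += e[1];
        }
        return covered;
    }
}
```

Sorting dominates, so the whole sweep runs in O(N log N).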
import java.util.LinkedList;
import java.util.List;

public class ReplaceBST {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    private static List<Integer> list = new LinkedList<Integer>();
    private static List<Node> nodeList = new LinkedList<Node>();

    // in-order walk: collects values (ascending for a BST) and node references
    public static void replace(Node root) {
        if (root == null) return;
        replace(root.left);
        list.add(root.value);
        nodeList.add(root);
        replace(root.right);
    }

    public static void inOrderTreeWalk(Node root) {
        if (root == null) return;
        inOrderTreeWalk(root.left);
        System.out.println(root.value);
        inOrderTreeWalk(root.right);
    }

    public static void main(String[] args) {
        Node node1 = new Node(1);
        Node node2 = new Node(2);
        Node node3 = new Node(3);
        Node node4 = new Node(4);
        Node node5 = new Node(5);
        Node node6 = new Node(6);
        Node node7 = new Node(7);
        node4.left = node2;
        node2.left = node1;
        node2.right = node3;
        node4.right = node6;
        node6.left = node5;
        node6.right = node7;
        replace(node4);
        // walk the in-order sequence from the largest value down, replacing
        // each node's value with the sum of all strictly greater values
        int cumul = 0;
        for (int i = list.size() - 1; i >= 0; i--) {
            nodeList.get(i).value = cumul;
            cumul += list.get(i);
        }
        inOrderTreeWalk(node4);
    }
}

glebstepanov1992
February 03, 2014 Please, clarify the question.
 glebstepanov1992 February 01, 2014 Nice, but this solution doesn't preserve the relative order of elements.
 glebstepanov1992 February 01, 2014Backtracking?
 glebstepanov1992 January 30, 2014 We should create a min-heap containing the first element of each chunk. Each time we extract the min from the heap and push it to the output, then add a new element from the head of the chunk whose element was just pushed to the output.
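That heap-based merge can be sketched like this; the names and the list-of-lists chunk representation are my own assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of the min-heap merge described above: the heap holds the current
// head of each sorted chunk; extract the min, then refill from that chunk.
public class KWayMerge {
    public static List<Integer> merge(List<List<Integer>> chunks) {
        // heap entries: {value, chunkIndex, positionInChunk}
        PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        for (int c = 0; c < chunks.size(); c++)
            if (!chunks.get(c).isEmpty())
                heap.offer(new int[]{chunks.get(c).get(0), c, 0});
        List<Integer> output = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] min = heap.poll();
            output.add(min[0]);
            int c = min[1], next = min[2] + 1;
            if (next < chunks.get(c).size()) // refill from the same chunk
                heap.offer(new int[]{chunks.get(c).get(next), c, next});
        }
        return output;
    }
}
```

With k chunks and n total elements this is O(n log k), since the heap never holds more than one element per chunk.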
 glebstepanov1992 January 30, 2014 log2(x) = logy(x) / logy(2), so use the built-in log.
 glebstepanov1992 January 30, 2014 This example doesn't work: for 2,3,4,2,6,5 the expected answer is 2, actual 1.
 glebstepanov1992 January 30, 2014 It seems correct, but I can't get why it works.
 glebstepanov1992 January 30, 2014

// Assuming the question asks for the n-th digit (1-indexed) of the sequence
// 4, 5, 44, 45, 54, 55, 444, ... where the k-digit block is the binary
// numbers 0..2^k - 1 written with 0 -> 4 and 1 -> 5.
public static int get45(int n) {
    int mask = 1;
    int sum = 0;   // total digits in all blocks before the k-digit block
    int k = 0;     // number of digits in the current block
    do {
        sum += k * (mask << k);   // the k-digit block has k * 2^k digits
        k++;
    } while (n > sum + k * (mask << k));
    int remainder = n - sum - 1;  // 0-based offset into the k-digit block
    int pos = remainder / k;      // which number of the block
    int mod = remainder % k;      // which digit of that number
    // digit mod of pos in binary: bit set -> 5, otherwise -> 4
    if ((pos & (mask << (k - mod - 1))) != 0) {
        return 5;
    } else {
        return 4;
    }
}

glebstepanov1992
January 29, 2014 1. Find all good nodes from the beginning (BFS or DFS).
2. Then find all strongly connected components among the nodes that aren't good.
The answer is the number of SCCs.
I think we need strongly connected components because, if we want to make a set of nodes good, they must be interconnected: we have to choose a node that will be connected to all others in the set.
 glebstepanov1992 January 29, 2014 Yep, you are right. But if they are not parallel to the axes, I have no idea.
 glebstepanov1992 January 28, 2014 Google "prefix search"; it is a distinct problem in search. A trie will also be OK. But what will you do with typos and multi-word requests?
 glebstepanov1992 January 28, 2014 Yep, but your solution needs O(n^2), where n is the number of rectangles. My solution, which uses sorting to detect interval intersections, needs O(N log N).
 glebstepanov1992 January 28, 2014 You need to project them onto the X axis and form an array of the begins and ends of their sides. Then do the same on the Y axis. After that, consider all rectangles whose interval fits into some other rectangle's interval, and check whether those rectangles intersect.
 glebstepanov1992 January 28, 2014 DP solution. I won't commit code, but here is the formula:
if (s[i] == s[j])
    T[i][j] = T[i + 1][j - 1] + 2
else
    T[i][j] = max(T[i + 1][j], T[i][j - 1])
The result is T[0][n - 1]; then we need to find the positions i and j where the result occurs.
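A sketch of the DP above, assuming T[i][j] is the length of the longest palindromic subsequence of s[i..j] (in which case a matching pair of end characters contributes 2); the class and method names are my own:

```java
// Sketch of the recurrence above, assuming T[i][j] is the length of the
// longest palindromic subsequence of s[i..j].
public class LongestPalinSubseq {
    public static int lps(String s) {
        int n = s.length();
        int[][] T = new int[n][n];
        for (int i = 0; i < n; i++) T[i][i] = 1; // single characters
        for (int len = 2; len <= n; len++) {
            for (int i = 0; i + len - 1 < n; i++) {
                int j = i + len - 1;
                if (s.charAt(i) == s.charAt(j))
                    T[i][j] = (len == 2 ? 0 : T[i + 1][j - 1]) + 2;
                else
                    T[i][j] = Math.max(T[i + 1][j], T[i][j - 1]);
            }
        }
        return T[0][n - 1];
    }
}
```

Filling the table by increasing substring length guarantees that T[i+1][j-1], T[i+1][j], and T[i][j-1] are ready before T[i][j].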
A triangle is a simple cycle of length 3. We can detect cycles with DFS in O(V + E); all we need is to find cycles and print those whose length is 3.
 glebstepanov1992 January 24, 2014 What kind of binary tree: just a simple BT, or a BST?
 glebstepanov1992 January 21, 2014 A route exists when a can be reached from b and vice versa? In that case you can find the strongly connected components: subsets of the graph where every pair of nodes u and v are reachable from each other. Then simply check whether the two vertices belong to the same SCC.
 glebstepanov1992 January 21, 2014 Actually this problem can be framed as a serialization problem. We need some function to check object identity. We can use Java's System.identityHashCode (or an IdentityHashMap) and put all nodes of the graph into a map where the key is this identity hash.
 glebstepanov1992 January 20, 2014 I can propose an N log N solution. Place all begins and ends of the intervals into an array, marking a begin as +1 and an end as -1. Sort them by the time component, then iterate over the array and accumulate the ones. The point where the running sum is maximal lies in the interval of maximum overlap.
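A minimal sketch of that sweep; the {begin, end} pair representation and the tie-breaking choice (ends before begins, i.e. touching intervals don't overlap) are my own assumptions:

```java
import java.util.Arrays;

// Sketch of the N log N sweep above: +1 at each begin, -1 at each end,
// sort by time, track the running sum; its maximum is the max overlap.
public class MaxOverlap {
    // intervals[i] = {begin, end}
    public static int maxOverlap(int[][] intervals) {
        int n = intervals.length;
        int[][] events = new int[2 * n][2];
        for (int i = 0; i < n; i++) {
            events[2 * i] = new int[]{intervals[i][0], +1};
            events[2 * i + 1] = new int[]{intervals[i][1], -1};
        }
        // sort by time; at equal times process ends (-1) before begins (+1)
        Arrays.sort(events, (a, b) -> a[0] != b[0] ? a[0] - b[0] : a[1] - b[1]);
        int best = 0, active = 0;
        for (int[] e : events) {
            active += e[1];
            best = Math.max(best, active);
        }
        return best;
    }
}
```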
 glebstepanov1992 January 20, 2014 We make a hash map keyed by the word's letters in sorted order. When we receive a request with a word, we sort it and return all words from the appropriate bucket. The hash table can also be distributed: we assign each node some interval, say from "aaaaaa" to "xxxxxx"; each node is responsible for a contiguous interval of the keys sorted in lexicographical order. So we can decide very fast which node should process a request.
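The single-node version of that map can be sketched as follows; the class and method names are my own:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the anagram index above: the key is the word's letters in
// sorted order, so all anagrams land in the same bucket.
public class AnagramIndex {
    private final Map<String, List<String>> buckets = new HashMap<>();

    private static String key(String word) {
        char[] letters = word.toCharArray();
        Arrays.sort(letters); // "listen" -> "eilnst"
        return new String(letters);
    }

    public void add(String word) {
        buckets.computeIfAbsent(key(word), k -> new ArrayList<>()).add(word);
    }

    public List<String> anagramsOf(String word) {
        return buckets.getOrDefault(key(word), List.of());
    }
}
```

The sorted-letters key is also what makes the distributed variant work: keys order lexicographically, so each node can own a contiguous key range.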
 glebstepanov1992 January 17, 2014 If I've understood the question well, we have some kind of network planning. We need to know which parts of the job can be done in parallel. We can do BFS on the graph and mark the vertices; vertices on the same level can be processed in parallel. But how to find distinct branches that can be processed independently, I have no idea.
 glebstepanov1992 January 17, 2014

class Stack<T> {
    private Queue<T> q1 = new LinkedList<T>();
    private Queue<T> q2 = new LinkedList<T>();

    public void push(T elem) {
        q1.offer(elem);
    }

    public T pop() {
        // drain q1 into q2 until only the most recently pushed element is left
        while (q1.size() > 1) {
            q2.offer(q1.poll());
        }
        T result = q1.poll();
        // move everything back so q1 is ready for the next operation
        while (!q2.isEmpty()) {
            q1.offer(q2.poll());
        }
        return result;
    }
}

glebstepanov1992
January 17, 2014 If two vertices are mutually reachable in a directed graph, it means they belong to the same strongly connected component. You can find all SCCs by DFS in O(V + E).
Traverse the graph by DFS and save the entry time in(v) and exit time out(v) for each vertex. Then invert all edges (actually, transpose the adjacency matrix) and run DFS again, starting each new tree from the unvisited vertex with the maximum out time. The distinct DFS trees of this second pass are the strongly connected components.
So you preprocess the graph once; after that, checking whether both vertices belong to the same SCC can be done in O(1).
For MRU, do the same as for LRU, but extract the element not from the tail of the list but from the head.
 glebstepanov1992 January 14, 2014The leftmost node of right subtree
 glebstepanov1992 January 13, 2014 Use the wave algorithm (Lee's algorithm, i.e. BFS over the grid).
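The wave algorithm expands a BFS "wave" from the source, labeling each cell with its distance. A minimal grid sketch; the grid encoding (0 = free, 1 = wall) and all names are my own assumptions:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Sketch of the wave (Lee) algorithm: BFS from the start cell, each layer
// of the queue is the next wave front. Grid encoding: 0 = free, 1 = wall.
public class WaveAlgorithm {
    public static int shortestPath(int[][] grid, int[] start, int[] goal) {
        int rows = grid.length, cols = grid[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1); // -1 = not reached yet
        Deque<int[]> queue = new ArrayDeque<>();
        dist[start[0]][start[1]] = 0;
        queue.add(start);
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            for (int[] m : moves) {
                int r = cell[0] + m[0], c = cell[1] + m[1];
                if (r >= 0 && r < rows && c >= 0 && c < cols
                        && grid[r][c] == 0 && dist[r][c] == -1) {
                    dist[r][c] = dist[cell[0]][cell[1]] + 1; // next wave front
                    queue.add(new int[]{r, c});
                }
            }
        }
        return dist[goal[0]][goal[1]]; // -1 if unreachable
    }
}
```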
 glebstepanov1992 January 11, 2014 What if a continuous run of 'aaa...' is longer than 256?
 glebstepanov1992 January 11, 2014 Filter lock, Peterson lock, etc.?
 glebstepanov1992 January 11, 2014 As was mentioned above, it is a simple graph problem. You need to find the strongly connected components of the graph: subsets of vertices where each u is reachable from each v in the same subset. This can be solved using a double DFS.
#include <cstdio>
#include <vector>
using namespace std;

vector< vector<int> > g, gr;   // graph and its transpose
vector<char> used;
vector<int> order, component;

// first pass: record vertices in order of increasing exit time
void dfs1 (int v) {
    used[v] = true;
    for (size_t i = 0; i < g[v].size(); ++i)
        if (!used[ g[v][i] ])
            dfs1 (g[v][i]);
    order.push_back (v);
}

// second pass on the transposed graph: one call collects one SCC
void dfs2 (int v) {
    used[v] = true;
    component.push_back (v);
    for (size_t i = 0; i < gr[v].size(); ++i)
        if (!used[ gr[v][i] ])
            dfs2 (gr[v][i]);
}

int main() {
    int n, m;                      // vertex and edge counts
    scanf ("%d %d", &n, &m);
    g.resize (n);
    gr.resize (n);
    for (int i = 0; i < m; ++i) {
        int a, b;                  // edge a -> b
        scanf ("%d %d", &a, &b);
        g[a].push_back (b);
        gr[b].push_back (a);
    }
    used.assign (n, false);
    for (int i = 0; i < n; ++i)
        if (!used[i])
            dfs1 (i);
    used.assign (n, false);
    for (int i = 0; i < n; ++i) {
        int v = order[n - 1 - i];  // vertices by decreasing exit time
        if (!used[v]) {
            dfs2 (v);
            for (size_t j = 0; j < component.size(); ++j)
                printf ("%d ", component[j]);
            printf ("\n");
            component.clear();
        }
    }
}

glebstepanov1992
January 04, 2014 What length could the sequence be? The first idea is that addition and multiplication are associative, and a * b == b * a. If you've got only A and M, you could compute X * countA mod Z and Y^countM mod Z very fast. But R breaks this, so you can only do this fast computation of additions and multiplications between R operations. Example:
AMMARAARMAMAAA
transforms to
first block R second block R third block
R destroys concurrency: you need to compute the blocks sequentially. But inside each block, the additions and multiplications are independent.
The idea is simple: we will solve it like the maximum subarray problem. All you need is to change each 0 to -1 (keeping each 1 as +1), so the maximum subarray will be the answer. Then you build an array where cumul[i] is the sum from 0 to i. Then you iterate over this array: for each position j, the candidate answer is cumul[j] - cumul[k], where k is the position of the minimum prefix sum to the left of j. We need to know this minimum and update it when it changes. The answer is the pair (k, j) that produces the maximum difference cumul[j] - cumul[k].
def cumulative(a):
    output = [a[0]]
    for i in range(1, len(a)):
        output.append(output[i - 1] + a[i])
    return output

def prepare(a):
    # map 0 -> -1 and keep 1 as +1
    return [1 if x == 1 else -1 for x in a]

def sub_array(a):
    a = prepare(a)        # prepare first, then take prefix sums
    cumul = cumulative(a)
    left = -1             # position of the current minimum prefix (empty prefix)
    left_min = 0          # minimum prefix sum seen so far
    left_global = 0       # left side of the maximal interval
    right = 0             # right side of the maximal interval
    global_max = -999999  # best sum found so far
    for r in range(len(a)):
        # if the sum ending at r beats the best so far, record the interval
        if cumul[r] - left_min > global_max:
            global_max = cumul[r] - left_min
            right = r
            left_global = left + 1
        # if the prefix sum up to r is a new minimum, remember it
        if cumul[r] < left_min:
            left = r
            left_min = cumul[r]
    return (left_global, right)

glebstepanov1992
January 04, 2014 All you need is to check that each column of the adjacency matrix has exactly one non-zero element.
 glebstepanov1992 January 03, 2014 A minimal spanning tree is a subset of edges with minimal total weight that preserves reachability between all graph vertices. About the algorithm: google Kruskal's or Prim's :)
 glebstepanov1992 January 03, 2014 Actually you don't even need to implement anything for an LRU cache; read the Javadoc for LinkedHashMap when you create it with three parameters:
new LinkedHashMap(capacity, loadFactor, accessOrder)
When accessOrder == true, the map iterates in access order; override removeEldestEntry and the map will behave like an LRU cache with the established capacity.
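A minimal sketch of such a cache; the class name is my own, and removeEldestEntry must be overridden for the map to actually evict:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the LinkedHashMap-based LRU cache described above.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(capacity, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least-recently-used entry
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch 1 so it becomes most recently used
        cache.put(3, "c"); // evicts 2, the least recently used
        System.out.println(cache.keySet());
    }
}
```

With accessOrder = true, every get and put moves the entry to the tail of the internal list, so the head is always the LRU candidate.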
 glebstepanov1992 January 03, 2014 Please, could you clarify the question about the shortener?
 glebstepanov1992 January 02, 2014 Actually ArrayList is based on arrays, so it is almost the same.
Pros of arrays: fast random access, because you don't need to follow links; but in a stack/queue this doesn't matter. Cons of this approach: your queue or stack can become quite big, which can cause a promotion failure from young to tenured; when the array becomes full you'll have a long operation of allocating memory and copying your data; and with big arrays there is a big risk of OutOfMemoryError.
LinkedList pros: very fast append both to head and tail, low risk of OutOfMemoryError, and there is no need to allocate big chunks of memory. Cons: memory fragmentation and long garbage collection, since the GC has to traverse all the references (there are a lot of them in a list) to be sure all nodes are reachable.
// removes duplicate characters in place, preserving first occurrences; O(n^2)
public static char[] removeDups(char[] dups) {
    if (dups.length == 0) return dups;
    int i = 0; // index of the last unique character kept so far
    for (int j = 1; j < dups.length; j++) {
        boolean seen = false;
        for (int k = 0; k <= i; k++) {
            if (dups[k] == dups[j]) {
                seen = true;
                break;
            }
        }
        if (!seen) {
            dups[++i] = dups[j];
        }
    }
    return Arrays.copyOf(dups, i + 1);
}

glebstepanov1992
December 24, 2013 Answer: implement garbage collection by reachability, like in the JVM. Run BFS or DFS from the stack along the references and mark objects. All unmarked objects are unreachable, so your compacting GC will be able to collect them and reuse the space. Is it enough?
Maybe the question was to design the data structures for the heap (PermGen, Young, Tenured/Old, Eden, etc.)?
I don't think so: an object can be used rarely but still be reachable, while some object that was recently used can become unreachable very fast.
 glebstepanov1992 December 24, 2013
Remove dups from the string and then use the simple algorithm based on the binary representation of subsets.
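A sketch of that binary-representation trick; each bit of the mask selects one character of the deduplicated string, and all names are my own:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the subset enumeration above: dedup the characters, then
// iterate over all 2^n bitmasks; each mask encodes one subset.
public class Subsets {
    public static List<String> subsets(String s) {
        // remove duplicate characters first, preserving order
        StringBuilder unique = new StringBuilder();
        for (char c : s.toCharArray())
            if (unique.indexOf(String.valueOf(c)) < 0) unique.append(c);
        int n = unique.length();
        List<String> result = new ArrayList<>();
        for (int mask = 0; mask < (1 << n); mask++) {
            StringBuilder subset = new StringBuilder();
            for (int bit = 0; bit < n; bit++)
                if ((mask & (1 << bit)) != 0) subset.append(unique.charAt(bit));
            result.add(subset.toString());
        }
        return result;
    }
}
```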
 glebstepanov1992 February 11, 2014