Professor Farkas and I found an error in the computation of the specific energy of the planar defect. The program computes the correct defect energy in eV but then divides by an incorrect area for the plane.

The defect energy (in eV) of the block is found by first computing the energy of the block without the defect and then subtracting the minimum energy of the block after imposing the defect (found at the bottom of the output file fevac.out for the planar-defect simulation). The energy of the block without the defect equals the total number of free atoms and buffer atoms times the cohesive energy of the perfect lattice for iron (-4.28 eV for bcc).

Next, the area of the plane must be found. The plane dimensions reported by the program cover only the inner block containing the free atoms; the area that should be used, however, includes the buffer region, so a corrected area must be computed. The area equals the x-dimension times the z-dimension of the plane containing both the inner block and the buffer thickness. To obtain these dimensions, multiply the x and z dimensions of the inner block reported by the program by the ratio of the corresponding buffer units to the corresponding inner-block units. For example, if your output file lists a buffer thickness of 6 units, an inner block of 5 units, and an inner-block x-dimension of 1, the corrected x-dimension is 1 * (6/5). Then compute the corrected area of the plane in square Angstroms.

Finally, divide the defect energy (in eV), found earlier, by this corrected area; multiply the result by 16 to convert the units from eV/Angstrom^2 to J/m^2 (1 eV/Angstrom^2 = 16.02 J/m^2); and divide by two to find the value for one plane. This final result is the specific energy of the planar defect in J/m^2.

John
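The corrected procedure can be sketched in a few lines of Python. This is only an illustration of the arithmetic described above: the function name and all parameter names are hypothetical, the inputs are the quantities read by hand from fevac.out, and 16.0218 is used as the exact eV/Angstrom^2 to J/m^2 factor (the text rounds it to 16).

```python
# Sketch of the corrected specific-energy calculation (hypothetical names).

E_COHESIVE_BCC_FE = -4.28  # eV/atom, cohesive energy of perfect bcc iron

def specific_defect_energy(n_free, n_buffer, e_block_with_defect,
                           x_inner, z_inner,
                           x_buffer_units, x_inner_units,
                           z_buffer_units, z_inner_units):
    """Return the specific planar-defect energy in J/m^2."""
    # Energy of the block without the defect:
    # (free atoms + buffer atoms) * cohesive energy
    e_perfect = (n_free + n_buffer) * E_COHESIVE_BCC_FE
    # Defect energy in eV: perfect-block energy minus the minimum
    # energy of the block after imposing the defect (from fevac.out)
    e_defect = e_perfect - e_block_with_defect
    # Scale the inner-block dimensions by the ratio of buffer units
    # to inner-block units so the area includes the buffer region
    x_full = x_inner * (x_buffer_units / x_inner_units)
    z_full = z_inner * (z_buffer_units / z_inner_units)
    area = x_full * z_full  # square Angstroms
    # eV/Angstrom^2 -> J/m^2 (factor 16.0218), then halve for one plane
    return e_defect / area * 16.0218 / 2.0
```

For instance, with 100 free and 50 buffer atoms, a minimum defected-block energy of -650 eV, inner-block dimensions of 10 x 10 Angstroms, and a 6/5 buffer-to-inner ratio in both directions, the call would be specific_defect_energy(100, 50, -650.0, 10.0, 10.0, 6, 5, 6, 5).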